Dec 09 14:51:34 localhost kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 09 14:51:34 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 09 14:51:34 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 09 14:51:34 localhost kernel: BIOS-provided physical RAM map:
Dec 09 14:51:34 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 09 14:51:34 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 09 14:51:34 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 09 14:51:34 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 09 14:51:34 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 09 14:51:34 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 09 14:51:34 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 09 14:51:34 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 09 14:51:34 localhost kernel: NX (Execute Disable) protection: active
Dec 09 14:51:34 localhost kernel: APIC: Static calls initialized
Dec 09 14:51:34 localhost kernel: SMBIOS 2.8 present.
Dec 09 14:51:34 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 09 14:51:34 localhost kernel: Hypervisor detected: KVM
Dec 09 14:51:34 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 09 14:51:34 localhost kernel: kvm-clock: using sched offset of 3051190111 cycles
Dec 09 14:51:34 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 09 14:51:34 localhost kernel: tsc: Detected 2800.000 MHz processor
Dec 09 14:51:34 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 09 14:51:34 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 09 14:51:34 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 09 14:51:34 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 09 14:51:34 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 09 14:51:34 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 09 14:51:34 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 09 14:51:34 localhost kernel: Using GB pages for direct mapping
Dec 09 14:51:34 localhost kernel: RAMDISK: [mem 0x2e955000-0x334a2fff]
Dec 09 14:51:34 localhost kernel: ACPI: Early table checksum verification disabled
Dec 09 14:51:34 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 09 14:51:34 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 14:51:34 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 14:51:34 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 14:51:34 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 09 14:51:34 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 14:51:34 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 14:51:34 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 09 14:51:34 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 09 14:51:34 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 09 14:51:34 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 09 14:51:34 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 09 14:51:34 localhost kernel: No NUMA configuration found
Dec 09 14:51:34 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 09 14:51:34 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Dec 09 14:51:34 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 09 14:51:34 localhost kernel: Zone ranges:
Dec 09 14:51:34 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 09 14:51:34 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 09 14:51:34 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 09 14:51:34 localhost kernel:   Device   empty
Dec 09 14:51:34 localhost kernel: Movable zone start for each node
Dec 09 14:51:34 localhost kernel: Early memory node ranges
Dec 09 14:51:34 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 09 14:51:34 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 09 14:51:34 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 09 14:51:34 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 09 14:51:34 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 09 14:51:34 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 09 14:51:34 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 09 14:51:34 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 09 14:51:34 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 09 14:51:34 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 09 14:51:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 09 14:51:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 09 14:51:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 09 14:51:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 09 14:51:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 09 14:51:34 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 09 14:51:34 localhost kernel: TSC deadline timer available
Dec 09 14:51:34 localhost kernel: CPU topo: Max. logical packages:   8
Dec 09 14:51:34 localhost kernel: CPU topo: Max. logical dies:       8
Dec 09 14:51:34 localhost kernel: CPU topo: Max. dies per package:   1
Dec 09 14:51:34 localhost kernel: CPU topo: Max. threads per core:   1
Dec 09 14:51:34 localhost kernel: CPU topo: Num. cores per package:     1
Dec 09 14:51:34 localhost kernel: CPU topo: Num. threads per package:   1
Dec 09 14:51:34 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 09 14:51:34 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 09 14:51:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 09 14:51:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 09 14:51:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 09 14:51:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 09 14:51:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 09 14:51:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 09 14:51:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 09 14:51:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 09 14:51:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 09 14:51:34 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 09 14:51:34 localhost kernel: Booting paravirtualized kernel on KVM
Dec 09 14:51:34 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 09 14:51:34 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 09 14:51:34 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 09 14:51:34 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 09 14:51:34 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 09 14:51:34 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 09 14:51:34 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 09 14:51:34 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 09 14:51:34 localhost kernel: random: crng init done
Dec 09 14:51:34 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 09 14:51:34 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 09 14:51:34 localhost kernel: Fallback order for Node 0: 0 
Dec 09 14:51:34 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 09 14:51:34 localhost kernel: Policy zone: Normal
Dec 09 14:51:34 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 09 14:51:34 localhost kernel: software IO TLB: area num 8.
Dec 09 14:51:34 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 09 14:51:34 localhost kernel: ftrace: allocating 49357 entries in 193 pages
Dec 09 14:51:34 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 09 14:51:34 localhost kernel: Dynamic Preempt: voluntary
Dec 09 14:51:34 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 09 14:51:34 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 09 14:51:34 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 09 14:51:34 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 09 14:51:34 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 09 14:51:34 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 09 14:51:34 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 09 14:51:34 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 09 14:51:34 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 09 14:51:34 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 09 14:51:34 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 09 14:51:34 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 09 14:51:34 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 09 14:51:34 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 09 14:51:34 localhost kernel: Console: colour VGA+ 80x25
Dec 09 14:51:34 localhost kernel: printk: console [ttyS0] enabled
Dec 09 14:51:34 localhost kernel: ACPI: Core revision 20230331
Dec 09 14:51:34 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 09 14:51:34 localhost kernel: x2apic enabled
Dec 09 14:51:34 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 09 14:51:34 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 09 14:51:34 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec 09 14:51:34 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 09 14:51:34 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 09 14:51:34 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 09 14:51:34 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 09 14:51:34 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 09 14:51:34 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 09 14:51:34 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 09 14:51:34 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 09 14:51:34 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 09 14:51:34 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 09 14:51:34 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 09 14:51:34 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 09 14:51:34 localhost kernel: x86/bugs: return thunk changed
Dec 09 14:51:34 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 09 14:51:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 09 14:51:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 09 14:51:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 09 14:51:34 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 09 14:51:34 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 09 14:51:34 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 09 14:51:34 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 09 14:51:34 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 09 14:51:34 localhost kernel: landlock: Up and running.
Dec 09 14:51:34 localhost kernel: Yama: becoming mindful.
Dec 09 14:51:34 localhost kernel: SELinux:  Initializing.
Dec 09 14:51:34 localhost kernel: LSM support for eBPF active
Dec 09 14:51:34 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 09 14:51:34 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 09 14:51:34 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 09 14:51:34 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 09 14:51:34 localhost kernel: ... version:                0
Dec 09 14:51:34 localhost kernel: ... bit width:              48
Dec 09 14:51:34 localhost kernel: ... generic registers:      6
Dec 09 14:51:34 localhost kernel: ... value mask:             0000ffffffffffff
Dec 09 14:51:34 localhost kernel: ... max period:             00007fffffffffff
Dec 09 14:51:34 localhost kernel: ... fixed-purpose events:   0
Dec 09 14:51:34 localhost kernel: ... event mask:             000000000000003f
Dec 09 14:51:34 localhost kernel: signal: max sigframe size: 1776
Dec 09 14:51:34 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 09 14:51:34 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 09 14:51:34 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 09 14:51:34 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 09 14:51:34 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 09 14:51:34 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 09 14:51:34 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec 09 14:51:34 localhost kernel: node 0 deferred pages initialised in 9ms
Dec 09 14:51:34 localhost kernel: Memory: 7774724K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 607520K reserved, 0K cma-reserved)
Dec 09 14:51:34 localhost kernel: devtmpfs: initialized
Dec 09 14:51:34 localhost kernel: x86/mm: Memory block size: 128MB
Dec 09 14:51:34 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 09 14:51:34 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 09 14:51:34 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 09 14:51:34 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 09 14:51:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 09 14:51:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 09 14:51:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 09 14:51:34 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 09 14:51:34 localhost kernel: audit: type=2000 audit(1765291892.109:1): state=initialized audit_enabled=0 res=1
Dec 09 14:51:34 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 09 14:51:34 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 09 14:51:34 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 09 14:51:34 localhost kernel: cpuidle: using governor menu
Dec 09 14:51:34 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 09 14:51:34 localhost kernel: PCI: Using configuration type 1 for base access
Dec 09 14:51:34 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 09 14:51:34 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 09 14:51:34 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 09 14:51:34 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 09 14:51:34 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 09 14:51:34 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 09 14:51:34 localhost kernel: Demotion targets for Node 0: null
Dec 09 14:51:34 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 09 14:51:34 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 09 14:51:34 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 09 14:51:34 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 09 14:51:34 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 09 14:51:34 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 09 14:51:34 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 09 14:51:34 localhost kernel: ACPI: Interpreter enabled
Dec 09 14:51:34 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 09 14:51:34 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 09 14:51:34 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 09 14:51:34 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 09 14:51:34 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 09 14:51:34 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 09 14:51:34 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [3] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [4] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [5] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [6] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [7] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [8] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [9] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [10] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [11] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [12] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [13] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [14] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [15] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [16] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [17] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [18] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [19] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [20] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [21] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [22] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [23] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [24] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [25] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [26] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [27] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [28] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [29] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [30] registered
Dec 09 14:51:34 localhost kernel: acpiphp: Slot [31] registered
Dec 09 14:51:34 localhost kernel: PCI host bridge to bus 0000:00
Dec 09 14:51:34 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 09 14:51:34 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 09 14:51:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 09 14:51:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 09 14:51:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 09 14:51:34 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 09 14:51:34 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 09 14:51:34 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 09 14:51:34 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 09 14:51:34 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 09 14:51:34 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 09 14:51:34 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 09 14:51:34 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 09 14:51:34 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 09 14:51:34 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 09 14:51:34 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 09 14:51:34 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 09 14:51:34 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 09 14:51:34 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 09 14:51:34 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 09 14:51:34 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 09 14:51:34 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 09 14:51:34 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 09 14:51:34 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 09 14:51:34 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 09 14:51:34 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 09 14:51:34 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 09 14:51:34 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 09 14:51:34 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 09 14:51:34 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 09 14:51:34 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 09 14:51:34 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 09 14:51:34 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 09 14:51:34 localhost kernel: iommu: Default domain type: Translated
Dec 09 14:51:34 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 09 14:51:34 localhost kernel: SCSI subsystem initialized
Dec 09 14:51:34 localhost kernel: ACPI: bus type USB registered
Dec 09 14:51:34 localhost kernel: usbcore: registered new interface driver usbfs
Dec 09 14:51:34 localhost kernel: usbcore: registered new interface driver hub
Dec 09 14:51:34 localhost kernel: usbcore: registered new device driver usb
Dec 09 14:51:34 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 09 14:51:34 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 09 14:51:34 localhost kernel: PTP clock support registered
Dec 09 14:51:34 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 09 14:51:34 localhost kernel: NetLabel: Initializing
Dec 09 14:51:34 localhost kernel: NetLabel:  domain hash size = 128
Dec 09 14:51:34 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 09 14:51:34 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 09 14:51:34 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 09 14:51:34 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 09 14:51:34 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 09 14:51:34 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 09 14:51:34 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 09 14:51:34 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 09 14:51:34 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 09 14:51:34 localhost kernel: vgaarb: loaded
Dec 09 14:51:34 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 09 14:51:34 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 09 14:51:34 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 09 14:51:34 localhost kernel: pnp: PnP ACPI init
Dec 09 14:51:34 localhost kernel: pnp 00:03: [dma 2]
Dec 09 14:51:34 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 09 14:51:34 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 09 14:51:34 localhost kernel: NET: Registered PF_INET protocol family
Dec 09 14:51:34 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 09 14:51:34 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 09 14:51:34 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 09 14:51:34 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 09 14:51:34 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 09 14:51:34 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 09 14:51:34 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 09 14:51:34 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 09 14:51:34 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 09 14:51:34 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 09 14:51:34 localhost kernel: NET: Registered PF_XDP protocol family
Dec 09 14:51:34 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 09 14:51:34 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 09 14:51:34 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 09 14:51:34 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 09 14:51:34 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 09 14:51:34 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 09 14:51:34 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 09 14:51:34 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73758 usecs
Dec 09 14:51:34 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 09 14:51:34 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 09 14:51:34 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 09 14:51:34 localhost kernel: ACPI: bus type thunderbolt registered
Dec 09 14:51:34 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 09 14:51:34 localhost kernel: Initialise system trusted keyrings
Dec 09 14:51:34 localhost kernel: Key type blacklist registered
Dec 09 14:51:34 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 09 14:51:34 localhost kernel: zbud: loaded
Dec 09 14:51:34 localhost kernel: integrity: Platform Keyring initialized
Dec 09 14:51:34 localhost kernel: integrity: Machine keyring initialized
Dec 09 14:51:34 localhost kernel: Freeing initrd memory: 77112K
Dec 09 14:51:34 localhost kernel: NET: Registered PF_ALG protocol family
Dec 09 14:51:34 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 09 14:51:34 localhost kernel: Key type asymmetric registered
Dec 09 14:51:34 localhost kernel: Asymmetric key parser 'x509' registered
Dec 09 14:51:34 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 09 14:51:34 localhost kernel: io scheduler mq-deadline registered
Dec 09 14:51:34 localhost kernel: io scheduler kyber registered
Dec 09 14:51:34 localhost kernel: io scheduler bfq registered
Dec 09 14:51:34 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 09 14:51:34 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 09 14:51:34 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 09 14:51:34 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 09 14:51:34 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 09 14:51:34 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 09 14:51:34 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 09 14:51:34 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 09 14:51:34 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 09 14:51:34 localhost kernel: Non-volatile memory driver v1.3
Dec 09 14:51:34 localhost kernel: rdac: device handler registered
Dec 09 14:51:34 localhost kernel: hp_sw: device handler registered
Dec 09 14:51:34 localhost kernel: emc: device handler registered
Dec 09 14:51:34 localhost kernel: alua: device handler registered
Dec 09 14:51:34 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 09 14:51:34 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 09 14:51:34 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 09 14:51:34 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 09 14:51:34 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 09 14:51:34 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 09 14:51:34 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 09 14:51:34 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 09 14:51:34 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 09 14:51:34 localhost kernel: hub 1-0:1.0: USB hub found
Dec 09 14:51:34 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 09 14:51:34 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 09 14:51:34 localhost kernel: usbserial: USB Serial support registered for generic
Dec 09 14:51:34 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 09 14:51:34 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 09 14:51:34 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 09 14:51:34 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 09 14:51:34 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 09 14:51:34 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 09 14:51:34 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 09 14:51:34 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-09T14:51:33 UTC (1765291893)
Dec 09 14:51:34 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 09 14:51:34 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 09 14:51:34 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 09 14:51:34 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 09 14:51:34 localhost kernel: usbcore: registered new interface driver usbhid
Dec 09 14:51:34 localhost kernel: usbhid: USB HID core driver
Dec 09 14:51:34 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 09 14:51:34 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 09 14:51:34 localhost kernel: Initializing XFRM netlink socket
Dec 09 14:51:34 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 09 14:51:34 localhost kernel: Segment Routing with IPv6
Dec 09 14:51:34 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 09 14:51:34 localhost kernel: mpls_gso: MPLS GSO support
Dec 09 14:51:34 localhost kernel: IPI shorthand broadcast: enabled
Dec 09 14:51:34 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 09 14:51:34 localhost kernel: AES CTR mode by8 optimization enabled
Dec 09 14:51:34 localhost kernel: sched_clock: Marking stable (1178002509, 159847070)->(1457571639, -119722060)
Dec 09 14:51:34 localhost kernel: registered taskstats version 1
Dec 09 14:51:34 localhost kernel: Loading compiled-in X.509 certificates
Dec 09 14:51:34 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 09 14:51:34 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 09 14:51:34 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 09 14:51:34 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 09 14:51:34 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 09 14:51:34 localhost kernel: Demotion targets for Node 0: null
Dec 09 14:51:34 localhost kernel: page_owner is disabled
Dec 09 14:51:34 localhost kernel: Key type .fscrypt registered
Dec 09 14:51:34 localhost kernel: Key type fscrypt-provisioning registered
Dec 09 14:51:34 localhost kernel: Key type big_key registered
Dec 09 14:51:34 localhost kernel: Key type encrypted registered
Dec 09 14:51:34 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 09 14:51:34 localhost kernel: Loading compiled-in module X.509 certificates
Dec 09 14:51:34 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 09 14:51:34 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 09 14:51:34 localhost kernel: ima: No architecture policies found
Dec 09 14:51:34 localhost kernel: evm: Initialising EVM extended attributes:
Dec 09 14:51:34 localhost kernel: evm: security.selinux
Dec 09 14:51:34 localhost kernel: evm: security.SMACK64 (disabled)
Dec 09 14:51:34 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 09 14:51:34 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 09 14:51:34 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 09 14:51:34 localhost kernel: evm: security.apparmor (disabled)
Dec 09 14:51:34 localhost kernel: evm: security.ima
Dec 09 14:51:34 localhost kernel: evm: security.capability
Dec 09 14:51:34 localhost kernel: evm: HMAC attrs: 0x1
Dec 09 14:51:34 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 09 14:51:34 localhost kernel: Running certificate verification RSA selftest
Dec 09 14:51:34 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 09 14:51:34 localhost kernel: Running certificate verification ECDSA selftest
Dec 09 14:51:34 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 09 14:51:34 localhost kernel: clk: Disabling unused clocks
Dec 09 14:51:34 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 09 14:51:34 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 09 14:51:34 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 09 14:51:34 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 09 14:51:34 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 09 14:51:34 localhost kernel: Run /init as init process
Dec 09 14:51:34 localhost kernel:   with arguments:
Dec 09 14:51:34 localhost kernel:     /init
Dec 09 14:51:34 localhost kernel:   with environment:
Dec 09 14:51:34 localhost kernel:     HOME=/
Dec 09 14:51:34 localhost kernel:     TERM=linux
Dec 09 14:51:34 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
Dec 09 14:51:34 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 09 14:51:34 localhost systemd[1]: Detected virtualization kvm.
Dec 09 14:51:34 localhost systemd[1]: Detected architecture x86-64.
Dec 09 14:51:34 localhost systemd[1]: Running in initrd.
Dec 09 14:51:34 localhost systemd[1]: No hostname configured, using default hostname.
Dec 09 14:51:34 localhost systemd[1]: Hostname set to <localhost>.
Dec 09 14:51:34 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 09 14:51:34 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 09 14:51:34 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 09 14:51:34 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 09 14:51:34 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 09 14:51:34 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 09 14:51:34 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 09 14:51:34 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 09 14:51:34 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 09 14:51:34 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 09 14:51:34 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 09 14:51:34 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 09 14:51:34 localhost systemd[1]: Reached target Local File Systems.
Dec 09 14:51:34 localhost systemd[1]: Reached target Path Units.
Dec 09 14:51:34 localhost systemd[1]: Reached target Slice Units.
Dec 09 14:51:34 localhost systemd[1]: Reached target Swaps.
Dec 09 14:51:34 localhost systemd[1]: Reached target Timer Units.
Dec 09 14:51:34 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 09 14:51:34 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 09 14:51:34 localhost systemd[1]: Listening on Journal Socket.
Dec 09 14:51:34 localhost systemd[1]: Listening on udev Control Socket.
Dec 09 14:51:34 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 09 14:51:34 localhost systemd[1]: Reached target Socket Units.
Dec 09 14:51:34 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 09 14:51:34 localhost systemd[1]: Starting Journal Service...
Dec 09 14:51:34 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 09 14:51:34 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 09 14:51:34 localhost systemd[1]: Starting Create System Users...
Dec 09 14:51:34 localhost systemd[1]: Starting Setup Virtual Console...
Dec 09 14:51:34 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 09 14:51:34 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 09 14:51:34 localhost systemd[1]: Finished Create System Users.
Dec 09 14:51:34 localhost systemd-journald[307]: Journal started
Dec 09 14:51:34 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/e88682e8711b4c739c891e4f2bbcc348) is 8.0M, max 153.6M, 145.6M free.
Dec 09 14:51:34 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Dec 09 14:51:34 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Dec 09 14:51:34 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 09 14:51:34 localhost systemd[1]: Started Journal Service.
Dec 09 14:51:34 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 09 14:51:34 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 09 14:51:34 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 09 14:51:34 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 09 14:51:34 localhost systemd[1]: Finished Setup Virtual Console.
Dec 09 14:51:34 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 09 14:51:34 localhost systemd[1]: Starting dracut cmdline hook...
Dec 09 14:51:34 localhost dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Dec 09 14:51:34 localhost dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 09 14:51:34 localhost systemd[1]: Finished dracut cmdline hook.
Dec 09 14:51:34 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 09 14:51:34 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 09 14:51:34 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 09 14:51:34 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 09 14:51:34 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 09 14:51:34 localhost kernel: RPC: Registered udp transport module.
Dec 09 14:51:34 localhost kernel: RPC: Registered tcp transport module.
Dec 09 14:51:34 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 09 14:51:34 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 09 14:51:34 localhost rpc.statd[447]: Version 2.5.4 starting
Dec 09 14:51:34 localhost rpc.statd[447]: Initializing NSM state
Dec 09 14:51:34 localhost rpc.idmapd[452]: Setting log level to 0
Dec 09 14:51:34 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 09 14:51:34 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 09 14:51:34 localhost systemd-udevd[465]: Using default interface naming scheme 'rhel-9.0'.
Dec 09 14:51:34 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 09 14:51:34 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 09 14:51:34 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 09 14:51:34 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 09 14:51:34 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 09 14:51:34 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 09 14:51:34 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 09 14:51:34 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 09 14:51:34 localhost systemd[1]: Reached target Network.
Dec 09 14:51:34 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 09 14:51:34 localhost systemd[1]: Starting dracut initqueue hook...
Dec 09 14:51:34 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 09 14:51:34 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 09 14:51:35 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 09 14:51:35 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 09 14:51:35 localhost kernel:  vda: vda1
Dec 09 14:51:35 localhost kernel: libata version 3.00 loaded.
Dec 09 14:51:35 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 09 14:51:35 localhost kernel: scsi host0: ata_piix
Dec 09 14:51:35 localhost kernel: scsi host1: ata_piix
Dec 09 14:51:35 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 09 14:51:35 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 09 14:51:35 localhost systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 09 14:51:35 localhost systemd[1]: Reached target Initrd Root Device.
Dec 09 14:51:35 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 09 14:51:35 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 09 14:51:35 localhost systemd[1]: Reached target System Initialization.
Dec 09 14:51:35 localhost systemd[1]: Reached target Basic System.
Dec 09 14:51:35 localhost kernel: ata1: found unknown device (class 0)
Dec 09 14:51:35 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 09 14:51:35 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 09 14:51:35 localhost systemd-udevd[496]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 14:51:35 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 09 14:51:35 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 09 14:51:35 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 09 14:51:35 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 09 14:51:35 localhost systemd[1]: Finished dracut initqueue hook.
Dec 09 14:51:35 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 09 14:51:35 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 09 14:51:35 localhost systemd[1]: Reached target Remote File Systems.
Dec 09 14:51:35 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 09 14:51:35 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 09 14:51:35 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec 09 14:51:35 localhost systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Dec 09 14:51:35 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 09 14:51:35 localhost systemd[1]: Mounting /sysroot...
Dec 09 14:51:36 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 09 14:51:36 localhost kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec 09 14:51:36 localhost kernel: XFS (vda1): Ending clean mount
Dec 09 14:51:36 localhost systemd[1]: Mounted /sysroot.
Dec 09 14:51:36 localhost systemd[1]: Reached target Initrd Root File System.
Dec 09 14:51:36 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 09 14:51:36 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 09 14:51:36 localhost systemd[1]: Reached target Initrd File Systems.
Dec 09 14:51:36 localhost systemd[1]: Reached target Initrd Default Target.
Dec 09 14:51:36 localhost systemd[1]: Starting dracut mount hook...
Dec 09 14:51:36 localhost systemd[1]: Finished dracut mount hook.
Dec 09 14:51:36 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 09 14:51:36 localhost rpc.idmapd[452]: exiting on signal 15
Dec 09 14:51:36 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 09 14:51:36 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 09 14:51:36 localhost systemd[1]: Stopped target Network.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Timer Units.
Dec 09 14:51:36 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 09 14:51:36 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Basic System.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Path Units.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Remote File Systems.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Slice Units.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Socket Units.
Dec 09 14:51:36 localhost systemd[1]: Stopped target System Initialization.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Local File Systems.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Swaps.
Dec 09 14:51:36 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped dracut mount hook.
Dec 09 14:51:36 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 09 14:51:36 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 09 14:51:36 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 09 14:51:36 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 09 14:51:36 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 09 14:51:36 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 09 14:51:36 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 09 14:51:36 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 09 14:51:36 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 09 14:51:36 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 09 14:51:36 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 09 14:51:36 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 09 14:51:36 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Closed udev Control Socket.
Dec 09 14:51:36 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Closed udev Kernel Socket.
Dec 09 14:51:36 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 09 14:51:36 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 09 14:51:36 localhost systemd[1]: Starting Cleanup udev Database...
Dec 09 14:51:36 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 09 14:51:36 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 09 14:51:36 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Stopped Create System Users.
Dec 09 14:51:36 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 09 14:51:36 localhost systemd[1]: Finished Cleanup udev Database.
Dec 09 14:51:36 localhost systemd[1]: Reached target Switch Root.
Dec 09 14:51:36 localhost systemd[1]: Starting Switch Root...
Dec 09 14:51:36 localhost systemd[1]: Switching root.
Dec 09 14:51:36 localhost systemd-journald[307]: Journal stopped
Dec 09 14:51:37 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
Dec 09 14:51:37 localhost kernel: audit: type=1404 audit(1765291896.400:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 09 14:51:37 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 14:51:37 localhost kernel: SELinux:  policy capability open_perms=1
Dec 09 14:51:37 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 14:51:37 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 09 14:51:37 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 14:51:37 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 14:51:37 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 14:51:37 localhost kernel: audit: type=1403 audit(1765291896.526:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 09 14:51:37 localhost systemd[1]: Successfully loaded SELinux policy in 131.251ms.
Dec 09 14:51:37 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.858ms.
Dec 09 14:51:37 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 09 14:51:37 localhost systemd[1]: Detected virtualization kvm.
Dec 09 14:51:37 localhost systemd[1]: Detected architecture x86-64.
Dec 09 14:51:37 localhost systemd-rc-local-generator[634]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 14:51:37 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 09 14:51:37 localhost systemd[1]: Stopped Switch Root.
Dec 09 14:51:37 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 09 14:51:37 localhost systemd[1]: Created slice Slice /system/getty.
Dec 09 14:51:37 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 09 14:51:37 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 09 14:51:37 localhost systemd[1]: Created slice User and Session Slice.
Dec 09 14:51:37 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 09 14:51:37 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 09 14:51:37 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 09 14:51:37 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 09 14:51:37 localhost systemd[1]: Stopped target Switch Root.
Dec 09 14:51:37 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 09 14:51:37 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 09 14:51:37 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 09 14:51:37 localhost systemd[1]: Reached target Path Units.
Dec 09 14:51:37 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 09 14:51:37 localhost systemd[1]: Reached target Slice Units.
Dec 09 14:51:37 localhost systemd[1]: Reached target Swaps.
Dec 09 14:51:37 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 09 14:51:37 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 09 14:51:37 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 09 14:51:37 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 09 14:51:37 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 09 14:51:37 localhost systemd[1]: Listening on udev Control Socket.
Dec 09 14:51:37 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 09 14:51:37 localhost systemd[1]: Mounting Huge Pages File System...
Dec 09 14:51:37 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 09 14:51:37 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 09 14:51:37 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 09 14:51:37 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 09 14:51:37 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 09 14:51:37 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 09 14:51:37 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 09 14:51:37 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 09 14:51:37 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 09 14:51:37 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 09 14:51:37 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 09 14:51:37 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 09 14:51:37 localhost systemd[1]: Stopped Journal Service.
Dec 09 14:51:37 localhost systemd[1]: Starting Journal Service...
Dec 09 14:51:37 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 09 14:51:37 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 09 14:51:37 localhost kernel: fuse: init (API version 7.37)
Dec 09 14:51:37 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 09 14:51:37 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 09 14:51:37 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 09 14:51:37 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 09 14:51:37 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 09 14:51:37 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 09 14:51:37 localhost systemd[1]: Mounted Huge Pages File System.
Dec 09 14:51:37 localhost systemd-journald[675]: Journal started
Dec 09 14:51:37 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 09 14:51:36 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 09 14:51:36 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 09 14:51:37 localhost systemd[1]: Started Journal Service.
Dec 09 14:51:37 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 09 14:51:37 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 09 14:51:37 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 09 14:51:37 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 09 14:51:37 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 09 14:51:37 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 09 14:51:37 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 09 14:51:37 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 09 14:51:37 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 09 14:51:37 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 09 14:51:37 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 09 14:51:37 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 09 14:51:37 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 09 14:51:37 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 09 14:51:37 localhost systemd[1]: Mounting FUSE Control File System...
Dec 09 14:51:37 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 09 14:51:37 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 09 14:51:37 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 09 14:51:37 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 09 14:51:37 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 09 14:51:37 localhost systemd[1]: Starting Create System Users...
Dec 09 14:51:37 localhost kernel: ACPI: bus type drm_connector registered
Dec 09 14:51:37 localhost systemd-journald[675]: Received client request to flush runtime journal.
Dec 09 14:51:37 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 09 14:51:37 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 09 14:51:37 localhost systemd[1]: Mounted FUSE Control File System.
Dec 09 14:51:37 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 09 14:51:37 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 09 14:51:37 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 09 14:51:37 localhost systemd[1]: Finished Create System Users.
Dec 09 14:51:37 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 09 14:51:37 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 09 14:51:37 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 09 14:51:37 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 09 14:51:37 localhost systemd[1]: Reached target Local File Systems.
Dec 09 14:51:37 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 09 14:51:37 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 09 14:51:37 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 09 14:51:37 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 09 14:51:37 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 09 14:51:37 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 09 14:51:37 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 09 14:51:37 localhost bootctl[694]: Couldn't find EFI system partition, skipping.
Dec 09 14:51:37 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 09 14:51:37 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 09 14:51:37 localhost systemd[1]: Starting Security Auditing Service...
Dec 09 14:51:37 localhost systemd[1]: Starting RPC Bind...
Dec 09 14:51:37 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 09 14:51:37 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 09 14:51:37 localhost auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 09 14:51:37 localhost auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 09 14:51:37 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 09 14:51:37 localhost systemd[1]: Started RPC Bind.
Dec 09 14:51:37 localhost augenrules[705]: /sbin/augenrules: No change
Dec 09 14:51:37 localhost augenrules[720]: No rules
Dec 09 14:51:37 localhost augenrules[720]: enabled 1
Dec 09 14:51:37 localhost augenrules[720]: failure 1
Dec 09 14:51:37 localhost augenrules[720]: pid 700
Dec 09 14:51:37 localhost augenrules[720]: rate_limit 0
Dec 09 14:51:37 localhost augenrules[720]: backlog_limit 8192
Dec 09 14:51:37 localhost augenrules[720]: lost 0
Dec 09 14:51:37 localhost augenrules[720]: backlog 0
Dec 09 14:51:37 localhost augenrules[720]: backlog_wait_time 60000
Dec 09 14:51:37 localhost augenrules[720]: backlog_wait_time_actual 0
Dec 09 14:51:37 localhost systemd[1]: Started Security Auditing Service.
Dec 09 14:51:37 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 09 14:51:37 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 09 14:51:37 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 09 14:51:37 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 09 14:51:37 localhost systemd[1]: Starting Update is Completed...
Dec 09 14:51:37 localhost systemd[1]: Finished Update is Completed.
Dec 09 14:51:37 localhost systemd-udevd[728]: Using default interface naming scheme 'rhel-9.0'.
Dec 09 14:51:37 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 09 14:51:37 localhost systemd[1]: Reached target System Initialization.
Dec 09 14:51:37 localhost systemd[1]: Started dnf makecache --timer.
Dec 09 14:51:37 localhost systemd[1]: Started Daily rotation of log files.
Dec 09 14:51:37 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 09 14:51:37 localhost systemd[1]: Reached target Timer Units.
Dec 09 14:51:37 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 09 14:51:37 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 09 14:51:37 localhost systemd[1]: Reached target Socket Units.
Dec 09 14:51:37 localhost systemd-udevd[741]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 14:51:37 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 09 14:51:37 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 09 14:51:37 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 09 14:51:37 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 09 14:51:37 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 09 14:51:37 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 09 14:51:37 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 09 14:51:37 localhost systemd[1]: Reached target Basic System.
Dec 09 14:51:37 localhost dbus-broker-lau[750]: Ready
Dec 09 14:51:37 localhost systemd[1]: Starting NTP client/server...
Dec 09 14:51:37 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 09 14:51:37 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 09 14:51:37 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 09 14:51:37 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 09 14:51:37 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 09 14:51:37 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 09 14:51:37 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 09 14:51:37 localhost systemd[1]: Started irqbalance daemon.
Dec 09 14:51:37 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 09 14:51:37 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 14:51:37 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 14:51:37 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 14:51:37 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 09 14:51:37 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 09 14:51:37 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 09 14:51:37 localhost systemd[1]: Starting User Login Management...
Dec 09 14:51:37 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 09 14:51:37 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 09 14:51:37 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 09 14:51:37 localhost kernel: Console: switching to colour dummy device 80x25
Dec 09 14:51:37 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 09 14:51:37 localhost kernel: [drm] features: -context_init
Dec 09 14:51:37 localhost chronyd[793]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 09 14:51:37 localhost chronyd[793]: Loaded 0 symmetric keys
Dec 09 14:51:37 localhost chronyd[793]: Using right/UTC timezone to obtain leap second data
Dec 09 14:51:37 localhost chronyd[793]: Loaded seccomp filter (level 2)
Dec 09 14:51:37 localhost systemd[1]: Started NTP client/server.
Dec 09 14:51:37 localhost kernel: [drm] number of scanouts: 1
Dec 09 14:51:37 localhost kernel: [drm] number of cap sets: 0
Dec 09 14:51:37 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 09 14:51:37 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 09 14:51:37 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 09 14:51:37 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 09 14:51:37 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 09 14:51:37 localhost systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 09 14:51:37 localhost systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 09 14:51:37 localhost systemd-logind[786]: New seat seat0.
Dec 09 14:51:37 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 09 14:51:37 localhost systemd[1]: Started User Login Management.
Dec 09 14:51:38 localhost kernel: kvm_amd: TSC scaling supported
Dec 09 14:51:38 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 09 14:51:38 localhost kernel: kvm_amd: Nested Paging enabled
Dec 09 14:51:38 localhost kernel: kvm_amd: LBR virtualization supported
Dec 09 14:51:38 localhost iptables.init[778]: iptables: Applying firewall rules: [  OK  ]
Dec 09 14:51:38 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 09 14:51:38 localhost cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 09 Dec 2025 14:51:38 +0000. Up 5.96 seconds.
Dec 09 14:51:38 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 09 14:51:38 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 09 14:51:38 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpkm8ih5hu.mount: Deactivated successfully.
Dec 09 14:51:38 localhost systemd[1]: Starting Hostname Service...
Dec 09 14:51:38 localhost systemd[1]: Started Hostname Service.
Dec 09 14:51:38 np0005552052.novalocal systemd-hostnamed[851]: Hostname set to <np0005552052.novalocal> (static)
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Reached target Preparation for Network.
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Starting Network Manager...
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8420] NetworkManager (version 1.54.2-1.el9) is starting... (boot:a58260a5-f855-49b9-849b-ff1e8bfdaaf7)
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8424] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8488] manager[0x564deab54000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8520] hostname: hostname: using hostnamed
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8520] hostname: static hostname changed from (none) to "np0005552052.novalocal"
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8524] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8627] manager[0x564deab54000]: rfkill: Wi-Fi hardware radio set enabled
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8628] manager[0x564deab54000]: rfkill: WWAN hardware radio set enabled
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8672] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8673] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8673] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8674] manager: Networking is enabled by state file
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8676] settings: Loaded settings plugin: keyfile (internal)
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8694] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8711] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8723] dhcp: init: Using DHCP client 'internal'
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8725] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8738] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8745] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8754] device (lo): Activation: starting connection 'lo' (1bb1a588-fc76-48e8-baa9-019c5d49bc8e)
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8763] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8766] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8794] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8798] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8800] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8803] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8805] device (eth0): carrier: link connected
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8809] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8816] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8822] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8826] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8827] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8829] manager: NetworkManager state is now CONNECTING
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8830] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8836] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8840] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8876] dhcp4 (eth0): state changed new lease, address=38.102.83.184
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8883] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.8903] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Started Network Manager.
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Reached target Network.
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.9129] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.9133] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.9134] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.9139] device (lo): Activation: successful, device activated.
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.9145] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.9148] manager: NetworkManager state is now CONNECTED_SITE
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.9150] device (eth0): Activation: successful, device activated.
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.9156] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 09 14:51:38 np0005552052.novalocal NetworkManager[855]: <info>  [1765291898.9160] manager: startup complete
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Reached target NFS client services.
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Reached target Remote File Systems.
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 09 14:51:38 np0005552052.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 09 Dec 2025 14:51:39 +0000. Up 6.88 seconds.
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.184         | 255.255.255.0 | global | fa:16:3e:fd:d6:6d |
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fefd:d66d/64 |       .       |  link  | fa:16:3e:fd:d6:6d |
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 09 14:51:39 np0005552052.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 09 14:51:40 np0005552052.novalocal useradd[987]: new group: name=cloud-user, GID=1001
Dec 09 14:51:40 np0005552052.novalocal useradd[987]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 09 14:51:40 np0005552052.novalocal useradd[987]: add 'cloud-user' to group 'adm'
Dec 09 14:51:40 np0005552052.novalocal useradd[987]: add 'cloud-user' to group 'systemd-journal'
Dec 09 14:51:40 np0005552052.novalocal useradd[987]: add 'cloud-user' to shadow group 'adm'
Dec 09 14:51:40 np0005552052.novalocal useradd[987]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: Generating public/private rsa key pair.
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: The key fingerprint is:
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: SHA256:415Ua7xEXdNHP27rRiR8kr03ibvCqLtdYB7WCNOuSuM root@np0005552052.novalocal
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: The key's randomart image is:
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: +---[RSA 3072]----+
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |               o+|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |        .    . .=|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |       o .  + +.o|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |        + o+ *.+.|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |        SB..= *oo|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |       .=ooo o.=o|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |     o ...+.. o.o|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |    o o..o.o ... |
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |     E +=.  ..o. |
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: The key fingerprint is:
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: SHA256:1x9cVQZEoiG9cu6SuiQliONsIAz0lelL+dGFmuy71lo root@np0005552052.novalocal
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: The key's randomart image is:
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: +---[ECDSA 256]---+
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: | .   .o ..o .o+.=|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |. . .o   o.+ . ..|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |.  .. o + o.    .|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |o. . + =..o. . . |
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |=.. o = S+. . o  |
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |=.   + o ..  . . |
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: | +  . . oE    .  |
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |.    o o+..      |
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |      +=o.       |
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: The key fingerprint is:
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: SHA256:pbb1NbO1gtH2iQ4dZsIkWnt6P98u0xQnv3UehpNgdko root@np0005552052.novalocal
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: The key's randomart image is:
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: +--[ED25519 256]--+
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |                 |
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |                 |
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |         o..     |
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |        oo=E.....|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |       .S.==+**o+|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |       . oooO=+BB|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |        .. +.++B=|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |          . +.+.+|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: |             oo=o|
Dec 09 14:51:40 np0005552052.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Reached target Network is Online.
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Starting System Logging Service...
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 09 14:51:40 np0005552052.novalocal sm-notify[1003]: Version 2.5.4 starting
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Starting Permit User Sessions...
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 09 14:51:40 np0005552052.novalocal sshd[1005]: Server listening on 0.0.0.0 port 22.
Dec 09 14:51:40 np0005552052.novalocal sshd[1005]: Server listening on :: port 22.
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Finished Permit User Sessions.
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Started Command Scheduler.
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Started Getty on tty1.
Dec 09 14:51:40 np0005552052.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Dec 09 14:51:40 np0005552052.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 09 14:51:40 np0005552052.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 96% if used.)
Dec 09 14:51:40 np0005552052.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Reached target Login Prompts.
Dec 09 14:51:40 np0005552052.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Dec 09 14:51:40 np0005552052.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Started System Logging Service.
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Reached target Multi-User System.
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 09 14:51:40 np0005552052.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 14:51:40 np0005552052.novalocal kdumpctl[1013]: kdump: No kdump initial ramdisk found.
Dec 09 14:51:40 np0005552052.novalocal kdumpctl[1013]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
Dec 09 14:51:40 np0005552052.novalocal sshd-session[1080]: Connection reset by 38.102.83.114 port 49810 [preauth]
Dec 09 14:51:40 np0005552052.novalocal sshd-session[1099]: Unable to negotiate with 38.102.83.114 port 49822: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 09 14:51:40 np0005552052.novalocal sshd-session[1108]: Connection reset by 38.102.83.114 port 49836 [preauth]
Dec 09 14:51:40 np0005552052.novalocal sshd-session[1119]: Unable to negotiate with 38.102.83.114 port 49850: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 09 14:51:40 np0005552052.novalocal sshd-session[1130]: Unable to negotiate with 38.102.83.114 port 49858: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 09 14:51:40 np0005552052.novalocal sshd-session[1141]: Connection reset by 38.102.83.114 port 49868 [preauth]
Dec 09 14:51:40 np0005552052.novalocal sshd-session[1156]: Unable to negotiate with 38.102.83.114 port 49886: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 09 14:51:40 np0005552052.novalocal cloud-init[1158]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 09 Dec 2025 14:51:40 +0000. Up 8.35 seconds.
Dec 09 14:51:40 np0005552052.novalocal sshd-session[1162]: Unable to negotiate with 38.102.83.114 port 49898: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 09 14:51:40 np0005552052.novalocal sshd-session[1148]: Connection closed by 38.102.83.114 port 49880 [preauth]
Dec 09 14:51:40 np0005552052.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 09 14:51:41 np0005552052.novalocal dracut[1282]: dracut-057-102.git20250818.el9
Dec 09 14:51:41 np0005552052.novalocal cloud-init[1300]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 09 Dec 2025 14:51:41 +0000. Up 8.75 seconds.
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
Dec 09 14:51:41 np0005552052.novalocal cloud-init[1329]: #############################################################
Dec 09 14:51:41 np0005552052.novalocal cloud-init[1332]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 09 14:51:41 np0005552052.novalocal cloud-init[1342]: 256 SHA256:1x9cVQZEoiG9cu6SuiQliONsIAz0lelL+dGFmuy71lo root@np0005552052.novalocal (ECDSA)
Dec 09 14:51:41 np0005552052.novalocal cloud-init[1350]: 256 SHA256:pbb1NbO1gtH2iQ4dZsIkWnt6P98u0xQnv3UehpNgdko root@np0005552052.novalocal (ED25519)
Dec 09 14:51:41 np0005552052.novalocal cloud-init[1357]: 3072 SHA256:415Ua7xEXdNHP27rRiR8kr03ibvCqLtdYB7WCNOuSuM root@np0005552052.novalocal (RSA)
Dec 09 14:51:41 np0005552052.novalocal cloud-init[1359]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 09 14:51:41 np0005552052.novalocal cloud-init[1362]: #############################################################
Dec 09 14:51:41 np0005552052.novalocal cloud-init[1300]: Cloud-init v. 24.4-7.el9 finished at Tue, 09 Dec 2025 14:51:41 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 8.93 seconds
Dec 09 14:51:41 np0005552052.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 09 14:51:41 np0005552052.novalocal systemd[1]: Reached target Cloud-init target.
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 09 14:51:41 np0005552052.novalocal dracut[1284]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 09 14:51:42 np0005552052.novalocal dracut[1284]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 09 14:51:42 np0005552052.novalocal dracut[1284]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 09 14:51:42 np0005552052.novalocal dracut[1284]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 09 14:51:42 np0005552052.novalocal dracut[1284]: memstrack is not available
Dec 09 14:51:42 np0005552052.novalocal dracut[1284]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 09 14:51:42 np0005552052.novalocal dracut[1284]: *** Including module: systemd ***
Dec 09 14:51:42 np0005552052.novalocal dracut[1284]: *** Including module: fips ***
Dec 09 14:51:43 np0005552052.novalocal dracut[1284]: *** Including module: systemd-initrd ***
Dec 09 14:51:43 np0005552052.novalocal dracut[1284]: *** Including module: i18n ***
Dec 09 14:51:43 np0005552052.novalocal dracut[1284]: *** Including module: drm ***
Dec 09 14:51:43 np0005552052.novalocal dracut[1284]: *** Including module: prefixdevname ***
Dec 09 14:51:43 np0005552052.novalocal dracut[1284]: *** Including module: kernel-modules ***
Dec 09 14:51:43 np0005552052.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 09 14:51:43 np0005552052.novalocal chronyd[793]: Selected source 206.108.0.131 (2.centos.pool.ntp.org)
Dec 09 14:51:43 np0005552052.novalocal chronyd[793]: System clock TAI offset set to 37 seconds
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]: *** Including module: kernel-modules-extra ***
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]: *** Including module: qemu ***
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]: *** Including module: fstab-sys ***
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]: *** Including module: rootfs-block ***
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]: *** Including module: terminfo ***
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]: *** Including module: udev-rules ***
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]: Skipping udev rule: 91-permissions.rules
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]: *** Including module: virtiofs ***
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]: *** Including module: dracut-systemd ***
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]: *** Including module: usrmount ***
Dec 09 14:51:44 np0005552052.novalocal dracut[1284]: *** Including module: base ***
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]: *** Including module: fs-lib ***
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]: *** Including module: kdumpbase ***
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:   microcode_ctl module: mangling fw_dir
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: configuration "intel" is ignored
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]: *** Including module: openssl ***
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]: *** Including module: shutdown ***
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]: *** Including module: squash ***
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]: *** Including modules done ***
Dec 09 14:51:45 np0005552052.novalocal dracut[1284]: *** Installing kernel module dependencies ***
Dec 09 14:51:46 np0005552052.novalocal dracut[1284]: *** Installing kernel module dependencies done ***
Dec 09 14:51:46 np0005552052.novalocal dracut[1284]: *** Resolving executable dependencies ***
Dec 09 14:51:48 np0005552052.novalocal irqbalance[780]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 09 14:51:48 np0005552052.novalocal irqbalance[780]: IRQ 25 affinity is now unmanaged
Dec 09 14:51:48 np0005552052.novalocal irqbalance[780]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 09 14:51:48 np0005552052.novalocal irqbalance[780]: IRQ 31 affinity is now unmanaged
Dec 09 14:51:48 np0005552052.novalocal irqbalance[780]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 09 14:51:48 np0005552052.novalocal irqbalance[780]: IRQ 28 affinity is now unmanaged
Dec 09 14:51:48 np0005552052.novalocal irqbalance[780]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 09 14:51:48 np0005552052.novalocal irqbalance[780]: IRQ 32 affinity is now unmanaged
Dec 09 14:51:48 np0005552052.novalocal irqbalance[780]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 09 14:51:48 np0005552052.novalocal irqbalance[780]: IRQ 30 affinity is now unmanaged
Dec 09 14:51:48 np0005552052.novalocal irqbalance[780]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 09 14:51:48 np0005552052.novalocal irqbalance[780]: IRQ 29 affinity is now unmanaged
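irqbalance applies its decisions by writing a CPU mask to /proc/irq/<n>/smp_affinity; when the kernel rejects the write with EPERM (likely here because these paravirtual interrupts cannot be retargeted in a KVM guest), irqbalance gives up and marks the IRQ unmanaged, which is all the six pairs of lines above record. A read-only sketch over the same interface:

import glob
import os

# Reading needs no privileges; the EPERM in the log happens on *writes*
# to these files, which is how irqbalance applies a new affinity mask.
for path in sorted(glob.glob("/proc/irq/*/smp_affinity"),
                   key=lambda p: int(p.split(os.sep)[3])):
    irq = path.split(os.sep)[3]
    try:
        with open(path) as fh:
            print(f"IRQ {irq}: affinity mask {fh.read().strip()}")
    except OSError as exc:
        print(f"IRQ {irq}: unreadable ({exc.strerror})")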
Dec 09 14:51:48 np0005552052.novalocal dracut[1284]: *** Resolving executable dependencies done ***
Dec 09 14:51:48 np0005552052.novalocal dracut[1284]: *** Generating early-microcode cpio image ***
Dec 09 14:51:48 np0005552052.novalocal dracut[1284]: *** Store current command line parameters ***
Dec 09 14:51:48 np0005552052.novalocal dracut[1284]: Stored kernel commandline:
Dec 09 14:51:48 np0005552052.novalocal dracut[1284]: No dracut internal kernel commandline stored in the initramfs
Dec 09 14:51:48 np0005552052.novalocal dracut[1284]: *** Install squash loader ***
Dec 09 14:51:49 np0005552052.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 14:51:49 np0005552052.novalocal dracut[1284]: *** Squashing the files inside the initramfs ***
Dec 09 14:51:50 np0005552052.novalocal dracut[1284]: *** Squashing the files inside the initramfs done ***
Dec 09 14:51:50 np0005552052.novalocal dracut[1284]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec 09 14:51:50 np0005552052.novalocal dracut[1284]: *** Hardlinking files ***
Dec 09 14:51:50 np0005552052.novalocal dracut[1284]: Mode:           real
Dec 09 14:51:50 np0005552052.novalocal dracut[1284]: Files:          50
Dec 09 14:51:50 np0005552052.novalocal dracut[1284]: Linked:         0 files
Dec 09 14:51:50 np0005552052.novalocal dracut[1284]: Compared:       0 xattrs
Dec 09 14:51:50 np0005552052.novalocal dracut[1284]: Compared:       0 files
Dec 09 14:51:50 np0005552052.novalocal dracut[1284]: Saved:          0 B
Dec 09 14:51:50 np0005552052.novalocal dracut[1284]: Duration:       0.000504 seconds
Dec 09 14:51:50 np0005552052.novalocal dracut[1284]: *** Hardlinking files done ***
Dec 09 14:51:50 np0005552052.novalocal dracut[1284]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec 09 14:51:51 np0005552052.novalocal kdumpctl[1013]: kdump: kexec: loaded kdump kernel
Dec 09 14:51:51 np0005552052.novalocal kdumpctl[1013]: kdump: Starting kdump: [OK]
Dec 09 14:51:51 np0005552052.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 09 14:51:51 np0005552052.novalocal systemd[1]: Startup finished in 1.530s (kernel) + 2.510s (initrd) + 14.939s (userspace) = 18.980s.
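The three phase timings in the summary line should add up to the reported total, modulo the rounding systemd applies when printing each figure; a one-line check:

# Figures copied from the 'Startup finished' line above; the ~1 ms gap
# is print-time rounding of the three addends.
kernel, initrd, userspace, total = 1.530, 2.510, 14.939, 18.980
assert abs((kernel + initrd + userspace) - total) < 0.002
print(f"sum of phases: {kernel + initrd + userspace:.3f}s, reported: {total}s")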
Dec 09 14:52:08 np0005552052.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 09 15:01:01 np0005552052.novalocal CROND[4299]: (root) CMD (run-parts /etc/cron.hourly)
Dec 09 15:01:01 np0005552052.novalocal run-parts[4302]: (/etc/cron.hourly) starting 0anacron
Dec 09 15:01:01 np0005552052.novalocal anacron[4310]: Anacron started on 2025-12-09
Dec 09 15:01:01 np0005552052.novalocal anacron[4310]: Will run job `cron.daily' in 39 min.
Dec 09 15:01:01 np0005552052.novalocal anacron[4310]: Will run job `cron.weekly' in 59 min.
Dec 09 15:01:01 np0005552052.novalocal anacron[4310]: Will run job `cron.monthly' in 79 min.
Dec 09 15:01:01 np0005552052.novalocal anacron[4310]: Jobs will be executed sequentially
Dec 09 15:01:01 np0005552052.novalocal run-parts[4312]: (/etc/cron.hourly) finished 0anacron
Dec 09 15:01:01 np0005552052.novalocal CROND[4298]: (root) CMDEND (run-parts /etc/cron.hourly)
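The three anacron delays line up with the stock EL9 /etc/anacrontab, which gives cron.daily/weekly/monthly fixed delays of 5, 25 and 45 minutes plus one random offset (RANDOM_DELAY=45) drawn once per anacron run, here evidently 34 minutes. Checking that arithmetic, with the stock per-job delays assumed rather than read from this host:

# Observed 'Will run job ...' delays from the log, and the default
# EL9 anacrontab per-job delays (an assumption about this host).
observed = {"cron.daily": 39, "cron.weekly": 59, "cron.monthly": 79}
base = {"cron.daily": 5, "cron.weekly": 25, "cron.monthly": 45}

offsets = {job: observed[job] - base[job] for job in observed}
print(offsets)                          # all three come out to 34
assert len(set(offsets.values())) == 1  # one shared random draw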
Dec 09 15:06:46 np0005552052.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Dec 09 15:06:46 np0005552052.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 09 15:06:46 np0005552052.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Dec 09 15:06:46 np0005552052.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 09 15:08:39 np0005552052.novalocal sshd-session[4320]: Invalid user ubuntu from 58.82.169.249 port 35218
Dec 09 15:08:39 np0005552052.novalocal sshd-session[4320]: Received disconnect from 58.82.169.249 port 35218:11:  [preauth]
Dec 09 15:08:39 np0005552052.novalocal sshd-session[4320]: Disconnected from invalid user ubuntu 58.82.169.249 port 35218 [preauth]
Dec 09 15:19:48 np0005552052.novalocal sshd-session[4326]: Received disconnect from 45.78.206.111 port 39418:11: Bye Bye [preauth]
Dec 09 15:19:48 np0005552052.novalocal sshd-session[4326]: Disconnected from authenticating user root 45.78.206.111 port 39418 [preauth]
Dec 09 15:23:05 np0005552052.novalocal sshd-session[4330]: Invalid user kali from 45.78.206.111 port 52298
Dec 09 15:23:06 np0005552052.novalocal sshd-session[4330]: Received disconnect from 45.78.206.111 port 52298:11: Bye Bye [preauth]
Dec 09 15:23:06 np0005552052.novalocal sshd-session[4330]: Disconnected from invalid user kali 45.78.206.111 port 52298 [preauth]
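The 'Invalid user ubuntu/kali ... [preauth]' bursts are routine Internet-wide SSH probes against this node's public address; nothing authenticated. A small filter that tallies such probes per source address from journal output (the script name and invocation below are illustrative, not from this job):

import re
import sys
from collections import Counter

# Feed it the journal, e.g.:  journalctl -u sshd --no-pager | python3 tally.py
PATTERN = re.compile(r"Invalid user (\S+) from (\d{1,3}(?:\.\d{1,3}){3})")

hits = Counter()
for line in sys.stdin:
    match = PATTERN.search(line)
    if match:
        hits[match.group(2)] += 1

for addr, count in hits.most_common():
    print(f"{addr}: {count} invalid-user attempts")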
Dec 09 15:27:58 np0005552052.novalocal sshd-session[4335]: Accepted publickey for zuul from 38.102.83.114 port 48740 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 09 15:27:58 np0005552052.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 09 15:27:58 np0005552052.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 09 15:27:58 np0005552052.novalocal systemd-logind[786]: New session 1 of user zuul.
Dec 09 15:27:58 np0005552052.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 09 15:27:58 np0005552052.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 09 15:27:58 np0005552052.novalocal systemd[4339]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Queued start job for default target Main User Target.
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Created slice User Application Slice.
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Started Daily Cleanup of User's Temporary Directories.
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Reached target Paths.
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Reached target Timers.
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Starting D-Bus User Message Bus Socket...
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Starting Create User's Volatile Files and Directories...
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Listening on D-Bus User Message Bus Socket.
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Reached target Sockets.
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Finished Create User's Volatile Files and Directories.
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Reached target Basic System.
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Reached target Main User Target.
Dec 09 15:27:59 np0005552052.novalocal systemd[4339]: Startup finished in 128ms.
Dec 09 15:27:59 np0005552052.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 09 15:27:59 np0005552052.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 09 15:27:59 np0005552052.novalocal sshd-session[4335]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
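The RSA SHA256:zhs3... field in the 'Accepted publickey' line is OpenSSH's key fingerprint: the unpadded base64 of the SHA-256 digest of the raw key blob. It can be recomputed from any public-key line:

import base64
import hashlib

def fingerprint(pubkey_line: str) -> str:
    # Field 2 of an authorized_keys/public-key line is the base64 key blob.
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# One of the ed25519 keys installed later in this log:
print(fingerprint(
    "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com"))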
Dec 09 15:27:59 np0005552052.novalocal python3[4423]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:28:02 np0005552052.novalocal python3[4451]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:28:08 np0005552052.novalocal irqbalance[780]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 09 15:28:08 np0005552052.novalocal irqbalance[780]: IRQ 26 affinity is now unmanaged
Dec 09 15:28:08 np0005552052.novalocal python3[4509]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:28:09 np0005552052.novalocal python3[4549]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 09 15:28:11 np0005552052.novalocal python3[4575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCajeqGRUaF+TxFywtXlTdyZcTp7z4khN2cUrKXznT/76sluSM2YCcNNXocWScVqM6u/9eCURlX40iKpXQiEZsHtoXQmK0fzIHJ0QiMy6g0kvxEwsQpnycnyuFLxNFwiC6bLn0YregMD8KSjC7FiE4Y8vBACY0kKVYD3WVVdr2z20x88lKvCq3N+jleQdJXkiDaciZhqAgcXfjXZ05+RzaC6UJ2H9athNyI/VitRu+klASI0/dwiRLx5XrBXo68Hh3XTuN823bdjsNS1oAt6MwMzlSSn24hBiYfCRUXM9yDxvAYynUUSx1WUyjBmwjo9qzre3n4MCF6/GRtlHaZH8fFNj783Jdnlpw4DfzwFSnSoeWxSN4tCQT0w7iKIq70Y2n0DTWUYgYUwXDkbqECiGAVAgv6RileSKnZ133+57QNee2ZShU+HcRAZe0rlk+ebE+dSMOXpYczYOYGsdQMlXyonAuYT2evS0Ws/OS9PtSTLNn4IVx9ez6pJJD+iH+s4NE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:12 np0005552052.novalocal python3[4599]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:12 np0005552052.novalocal python3[4698]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:28:13 np0005552052.novalocal python3[4769]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765294092.4408872-207-229910259300298/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=13a5b959112f4f4680367c7b4e76e1b9_id_rsa follow=False checksum=f2f07046f4a6ba9d069e798981173ef99f50b617 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:13 np0005552052.novalocal python3[4892]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:28:13 np0005552052.novalocal python3[4963]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765294093.2532878-240-50855603600903/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=13a5b959112f4f4680367c7b4e76e1b9_id_rsa.pub follow=False checksum=18927fa4ad59fb4cb3c02595c58f565709f6599f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:15 np0005552052.novalocal python3[5011]: ansible-ping Invoked with data=pong
Dec 09 15:28:16 np0005552052.novalocal python3[5035]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:28:17 np0005552052.novalocal python3[5093]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 09 15:28:18 np0005552052.novalocal python3[5127]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:19 np0005552052.novalocal python3[5151]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:19 np0005552052.novalocal python3[5175]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:19 np0005552052.novalocal sshd-session[5096]: Received disconnect from 45.78.206.111 port 53398:11: Bye Bye [preauth]
Dec 09 15:28:19 np0005552052.novalocal sshd-session[5096]: Disconnected from authenticating user root 45.78.206.111 port 53398 [preauth]
Dec 09 15:28:19 np0005552052.novalocal python3[5199]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:19 np0005552052.novalocal python3[5223]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:20 np0005552052.novalocal python3[5247]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
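A quirk worth decoding in these ansible-file entries: mode is logged as a plain decimal integer, the usual sign that the playbook wrote an unquoted leading-zero literal, which YAML parses as an octal int. The values seen in this log map back to familiar permissions:

# Decimal 'mode=' values from this log and their octal readings.
for mode in (448, 384, 420, 493, 511, 288):
    print(f"mode={mode:<4} -> {mode:#o}")
# 448 -> 0o700, 384 -> 0o600, 420 -> 0o644,
# 493 -> 0o755, 511 -> 0o777, 288 -> 0o440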
Dec 09 15:28:21 np0005552052.novalocal sudo[5271]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvgiftuntmpizlopyrqujymehmqcrlqf ; /usr/bin/python3'
Dec 09 15:28:21 np0005552052.novalocal sudo[5271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:28:21 np0005552052.novalocal python3[5273]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:21 np0005552052.novalocal sudo[5271]: pam_unix(sudo:session): session closed for user root
Dec 09 15:28:22 np0005552052.novalocal sudo[5350]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pigghlektdgbevqgxkjwqlsalzlboqoz ; /usr/bin/python3'
Dec 09 15:28:22 np0005552052.novalocal sudo[5350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:28:22 np0005552052.novalocal python3[5352]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:28:22 np0005552052.novalocal sudo[5350]: pam_unix(sudo:session): session closed for user root
Dec 09 15:28:22 np0005552052.novalocal sudo[5423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znkranimyzgdsgpdjaxndxifxyynslwd ; /usr/bin/python3'
Dec 09 15:28:22 np0005552052.novalocal sudo[5423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:28:22 np0005552052.novalocal python3[5425]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765294101.8935516-21-252682200560441/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:22 np0005552052.novalocal sudo[5423]: pam_unix(sudo:session): session closed for user root
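Every privileged task in this run shows the same sudo shape: /bin/sh -c 'echo BECOME-SUCCESS-<32 lowercase letters>; /usr/bin/python3'. The random marker is how the Ansible controller detects that privilege escalation succeeded before it feeds the module payload to the interpreter. A sketch of constructing such a wrapper (an approximation of Ansible's internals, not a copy):

import random
import string

# 32 lowercase letters, matching the markers visible in the sudo lines above.
success_key = "BECOME-SUCCESS-" + "".join(
    random.choices(string.ascii_lowercase, k=32))
become_cmd = f"/bin/sh -c 'echo {success_key}; /usr/bin/python3'"
print(become_cmd)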
Dec 09 15:28:23 np0005552052.novalocal python3[5473]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:23 np0005552052.novalocal python3[5497]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:24 np0005552052.novalocal python3[5521]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:24 np0005552052.novalocal python3[5545]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:24 np0005552052.novalocal python3[5569]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:24 np0005552052.novalocal python3[5593]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:25 np0005552052.novalocal python3[5617]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:25 np0005552052.novalocal python3[5641]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:25 np0005552052.novalocal python3[5665]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:26 np0005552052.novalocal python3[5689]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:26 np0005552052.novalocal python3[5713]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:26 np0005552052.novalocal python3[5737]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:26 np0005552052.novalocal python3[5761]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:27 np0005552052.novalocal python3[5785]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:27 np0005552052.novalocal python3[5809]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:27 np0005552052.novalocal python3[5833]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:28 np0005552052.novalocal python3[5857]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:28 np0005552052.novalocal python3[5881]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:28 np0005552052.novalocal python3[5905]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:28 np0005552052.novalocal python3[5929]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:29 np0005552052.novalocal python3[5953]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:29 np0005552052.novalocal python3[5977]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:29 np0005552052.novalocal python3[6001]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:30 np0005552052.novalocal python3[6025]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:30 np0005552052.novalocal python3[6049]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:30 np0005552052.novalocal python3[6073]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:28:33 np0005552052.novalocal sudo[6097]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygebsfzydzvjobwvugnrnhlxdbhpdnms ; /usr/bin/python3'
Dec 09 15:28:33 np0005552052.novalocal sudo[6097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:28:33 np0005552052.novalocal python3[6099]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 09 15:28:33 np0005552052.novalocal systemd[1]: Starting Time & Date Service...
Dec 09 15:28:33 np0005552052.novalocal systemd[1]: Started Time & Date Service.
Dec 09 15:28:33 np0005552052.novalocal systemd-timedated[6101]: Changed time zone to 'UTC' (UTC).
Dec 09 15:28:33 np0005552052.novalocal sudo[6097]: pam_unix(sudo:session): session closed for user root
Dec 09 15:28:33 np0005552052.novalocal sudo[6128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhfnuififsmujlmcijncbyazwxqigxih ; /usr/bin/python3'
Dec 09 15:28:33 np0005552052.novalocal sudo[6128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:28:33 np0005552052.novalocal python3[6130]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:34 np0005552052.novalocal sudo[6128]: pam_unix(sudo:session): session closed for user root
Dec 09 15:28:34 np0005552052.novalocal python3[6206]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:28:34 np0005552052.novalocal python3[6277]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765294114.2065454-153-193920596752909/source _original_basename=tmp6d08c0wa follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:35 np0005552052.novalocal python3[6377]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:28:35 np0005552052.novalocal python3[6448]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765294115.0996704-183-184772527822972/source _original_basename=tmp7rtipbem follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
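The checksum recorded for both /etc/nodepool/sub_nodes and sub_nodes_private, da39a3ee..., is the SHA-1 of zero bytes, so both files were deployed empty: this nodeset has no sub-nodes. Quick confirmation:

import hashlib

# SHA-1 of the empty string matches the checksum= logged above.
assert (hashlib.sha1(b"").hexdigest()
        == "da39a3ee5e6b4b0d3255bfef95601890afd80709")
print("both sub_nodes files were empty")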
Dec 09 15:28:36 np0005552052.novalocal sudo[6548]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrqideyaxcmnglcdxhtlmalownftzeby ; /usr/bin/python3'
Dec 09 15:28:36 np0005552052.novalocal sudo[6548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:28:36 np0005552052.novalocal python3[6550]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:28:36 np0005552052.novalocal sudo[6548]: pam_unix(sudo:session): session closed for user root
Dec 09 15:28:36 np0005552052.novalocal sudo[6621]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djuexlcwxtnxqzfasnnnidfanijovvmr ; /usr/bin/python3'
Dec 09 15:28:36 np0005552052.novalocal sudo[6621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:28:36 np0005552052.novalocal python3[6623]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765294116.1576533-231-174758741972546/source _original_basename=tmp1znltdkh follow=False checksum=faf3cf8b99afe143012f152c554f05a914a3872d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:36 np0005552052.novalocal sudo[6621]: pam_unix(sudo:session): session closed for user root
Dec 09 15:28:37 np0005552052.novalocal python3[6671]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:28:37 np0005552052.novalocal python3[6697]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:28:38 np0005552052.novalocal sudo[6775]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxlxllzekxhkpdgdjxprzmiboncmyjpa ; /usr/bin/python3'
Dec 09 15:28:38 np0005552052.novalocal sudo[6775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:28:38 np0005552052.novalocal python3[6777]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:28:38 np0005552052.novalocal sudo[6775]: pam_unix(sudo:session): session closed for user root
Dec 09 15:28:38 np0005552052.novalocal sudo[6848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vysktmfaovytccfotcvlqgamealhmuvu ; /usr/bin/python3'
Dec 09 15:28:38 np0005552052.novalocal sudo[6848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:28:38 np0005552052.novalocal python3[6850]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765294118.0290418-273-33224231191774/source _original_basename=tmp59tcd_0p follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:38 np0005552052.novalocal sudo[6848]: pam_unix(sudo:session): session closed for user root
Dec 09 15:28:39 np0005552052.novalocal sudo[6899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwlyiufwumsjsthztprilwqponbvoryd ; /usr/bin/python3'
Dec 09 15:28:39 np0005552052.novalocal sudo[6899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:28:39 np0005552052.novalocal python3[6901]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-902e-c2de-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:28:39 np0005552052.novalocal sudo[6899]: pam_unix(sudo:session): session closed for user root
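Running /usr/sbin/visudo -c right after dropping a file into /etc/sudoers.d is the standard guard against locking everyone out with a malformed sudoers fragment: visudo validates the whole sudoers tree and exits non-zero on a parse error. The equivalent check from Python (needs root, since the fragments are mode 0440):

import subprocess

# visudo -c parses /etc/sudoers plus /etc/sudoers.d/* and reports each
# file; a non-zero exit means some fragment failed to parse.
result = subprocess.run(["/usr/sbin/visudo", "-c"],
                        capture_output=True, text=True)
print(result.stdout, end="")
raise SystemExit(result.returncode)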
Dec 09 15:28:40 np0005552052.novalocal python3[6929]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ef9-e89a-902e-c2de-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 09 15:28:41 np0005552052.novalocal python3[6957]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:50 np0005552052.novalocal sshd-session[6958]: Connection closed by 87.236.176.212 port 59573
Dec 09 15:28:51 np0005552052.novalocal sshd-session[6959]: Connection closed by 87.236.176.212 port 58695 [preauth]
Dec 09 15:28:59 np0005552052.novalocal sudo[6984]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aydutflrzybwyhjynqxtvkplvynubumv ; /usr/bin/python3'
Dec 09 15:28:59 np0005552052.novalocal sudo[6984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:28:59 np0005552052.novalocal python3[6986]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:28:59 np0005552052.novalocal sudo[6984]: pam_unix(sudo:session): session closed for user root
Dec 09 15:29:03 np0005552052.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 09 15:29:33 np0005552052.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 09 15:29:33 np0005552052.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 09 15:29:33 np0005552052.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 09 15:29:33 np0005552052.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 09 15:29:33 np0005552052.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 09 15:29:33 np0005552052.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 09 15:29:33 np0005552052.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 09 15:29:33 np0005552052.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 09 15:29:33 np0005552052.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 09 15:29:33 np0005552052.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 09 15:29:33 np0005552052.novalocal NetworkManager[855]: <info>  [1765294173.6385] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 09 15:29:33 np0005552052.novalocal systemd-udevd[6990]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 15:29:33 np0005552052.novalocal NetworkManager[855]: <info>  [1765294173.6570] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:29:33 np0005552052.novalocal NetworkManager[855]: <info>  [1765294173.6600] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 09 15:29:33 np0005552052.novalocal NetworkManager[855]: <info>  [1765294173.6603] device (eth1): carrier: link connected
Dec 09 15:29:33 np0005552052.novalocal NetworkManager[855]: <info>  [1765294173.6605] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 09 15:29:33 np0005552052.novalocal NetworkManager[855]: <info>  [1765294173.6611] policy: auto-activating connection 'Wired connection 1' (9e208782-6966-35e9-a939-9a846f39c3da)
Dec 09 15:29:33 np0005552052.novalocal NetworkManager[855]: <info>  [1765294173.6616] device (eth1): Activation: starting connection 'Wired connection 1' (9e208782-6966-35e9-a939-9a846f39c3da)
Dec 09 15:29:33 np0005552052.novalocal NetworkManager[855]: <info>  [1765294173.6617] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:29:33 np0005552052.novalocal NetworkManager[855]: <info>  [1765294173.6620] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:29:33 np0005552052.novalocal NetworkManager[855]: <info>  [1765294173.6623] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:29:33 np0005552052.novalocal NetworkManager[855]: <info>  [1765294173.6627] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 09 15:29:34 np0005552052.novalocal python3[7016]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-b07d-56ca-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
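ip -j link prints the link table as JSON, one object per interface with keys such as ifname, operstate and address, which makes it straightforward for the playbook to locate the hot-plugged eth1. A sketch of consuming that output:

import json
import subprocess

links = json.loads(subprocess.run(
    ["ip", "-j", "link"], capture_output=True, text=True,
    check=True).stdout)
for link in links:
    print(link["ifname"], link.get("operstate"), link.get("address", "-"))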
Dec 09 15:29:44 np0005552052.novalocal sudo[7094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrsbbjmibablxkcyrajavcweglfgqocn ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 15:29:44 np0005552052.novalocal sudo[7094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:29:44 np0005552052.novalocal python3[7096]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:29:44 np0005552052.novalocal sudo[7094]: pam_unix(sudo:session): session closed for user root
Dec 09 15:29:44 np0005552052.novalocal sudo[7167]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rolsuurwfnctidrnctaflngglnuhxkzj ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 15:29:44 np0005552052.novalocal sudo[7167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:29:45 np0005552052.novalocal python3[7169]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765294184.4033675-102-224505552608258/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=0d36724261da39a94482e744a08ff1edc04b67ea backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:29:45 np0005552052.novalocal sudo[7167]: pam_unix(sudo:session): session closed for user root
Dec 09 15:29:45 np0005552052.novalocal sudo[7217]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grmxzwpsgpxqzcjxjzyfigkxmpagzajz ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 15:29:45 np0005552052.novalocal sudo[7217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:29:45 np0005552052.novalocal python3[7219]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
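The file pushed just before this restart is a NetworkManager keyfile; NM only loads connection files under /etc/NetworkManager/system-connections that are owned by root and not group- or world-readable, which is why the copy sets mode=0600. What such a keyfile plausibly looks like for the CI private network (the concrete id, interface name and method below are guesses; only the keyfile structure and the permission rule come from NetworkManager's documented behavior):

import os

# Hypothetical contents; the real ci-private-network.nmconnection is
# rendered from a Zuul template not visible in this log.
KEYFILE = """\
[connection]
id=ci-private-network
type=ethernet
interface-name=eth1

[ipv4]
method=auto

[ipv6]
method=ignore
"""

path = "/tmp/ci-private-network.nmconnection"   # demo path, not /etc
with open(path, "w") as fh:
    fh.write(KEYFILE)
os.chmod(path, 0o600)   # NM rejects world-readable connection files
print("wrote", path)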
Dec 09 15:29:45 np0005552052.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 09 15:29:45 np0005552052.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 09 15:29:45 np0005552052.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 09 15:29:45 np0005552052.novalocal systemd[1]: Stopping Network Manager...
Dec 09 15:29:45 np0005552052.novalocal NetworkManager[855]: <info>  [1765294185.9515] caught SIGTERM, shutting down normally.
Dec 09 15:29:45 np0005552052.novalocal NetworkManager[855]: <info>  [1765294185.9526] dhcp4 (eth0): canceled DHCP transaction
Dec 09 15:29:45 np0005552052.novalocal NetworkManager[855]: <info>  [1765294185.9526] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 15:29:45 np0005552052.novalocal NetworkManager[855]: <info>  [1765294185.9526] dhcp4 (eth0): state changed no lease
Dec 09 15:29:45 np0005552052.novalocal NetworkManager[855]: <info>  [1765294185.9529] manager: NetworkManager state is now CONNECTING
Dec 09 15:29:45 np0005552052.novalocal NetworkManager[855]: <info>  [1765294185.9668] dhcp4 (eth1): canceled DHCP transaction
Dec 09 15:29:45 np0005552052.novalocal NetworkManager[855]: <info>  [1765294185.9669] dhcp4 (eth1): state changed no lease
Dec 09 15:29:45 np0005552052.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 15:29:45 np0005552052.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[855]: <info>  [1765294186.1603] exiting (success)
Dec 09 15:29:46 np0005552052.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 09 15:29:46 np0005552052.novalocal systemd[1]: Stopped Network Manager.
Dec 09 15:29:46 np0005552052.novalocal systemd[1]: NetworkManager.service: Consumed 12.432s CPU time, 9.9M memory peak.
Dec 09 15:29:46 np0005552052.novalocal systemd[1]: Starting Network Manager...
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.2180] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:a58260a5-f855-49b9-849b-ff1e8bfdaaf7)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.2181] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.2249] manager[0x563b91eb9000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 09 15:29:46 np0005552052.novalocal systemd[1]: Starting Hostname Service...
Dec 09 15:29:46 np0005552052.novalocal systemd[1]: Started Hostname Service.
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3082] hostname: hostname: using hostnamed
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3083] hostname: static hostname changed from (none) to "np0005552052.novalocal"
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3089] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3094] manager[0x563b91eb9000]: rfkill: Wi-Fi hardware radio set enabled
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3095] manager[0x563b91eb9000]: rfkill: WWAN hardware radio set enabled
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3131] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3132] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3132] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3133] manager: Networking is enabled by state file
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3135] settings: Loaded settings plugin: keyfile (internal)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3140] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3165] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3176] dhcp: init: Using DHCP client 'internal'
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3179] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3184] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3189] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3198] device (lo): Activation: starting connection 'lo' (1bb1a588-fc76-48e8-baa9-019c5d49bc8e)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3205] device (eth0): carrier: link connected
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3210] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3215] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3216] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3225] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3233] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3240] device (eth1): carrier: link connected
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3245] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3249] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (9e208782-6966-35e9-a939-9a846f39c3da) (indicated)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3250] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3255] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3263] device (eth1): Activation: starting connection 'Wired connection 1' (9e208782-6966-35e9-a939-9a846f39c3da)
Dec 09 15:29:46 np0005552052.novalocal systemd[1]: Started Network Manager.
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3271] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3277] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3280] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3282] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3284] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3287] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3289] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3291] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3293] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3299] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3301] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3311] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3314] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3335] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3338] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 09 15:29:46 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294186.3342] device (lo): Activation: successful, device activated.
Dec 09 15:29:46 np0005552052.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 09 15:29:46 np0005552052.novalocal sudo[7217]: pam_unix(sudo:session): session closed for user root
Dec 09 15:29:47 np0005552052.novalocal python3[7286]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-b07d-56ca-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:29:48 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294188.1380] dhcp4 (eth0): state changed new lease, address=38.102.83.184
Dec 09 15:29:48 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294188.1391] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 09 15:29:48 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294188.1461] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 09 15:29:48 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294188.1493] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 09 15:29:48 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294188.1494] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 09 15:29:48 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294188.1498] manager: NetworkManager state is now CONNECTED_SITE
Dec 09 15:29:48 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294188.1502] device (eth0): Activation: successful, device activated.
Dec 09 15:29:48 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294188.1506] manager: NetworkManager state is now CONNECTED_GLOBAL
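[annotation] The eth0 entries above trace NetworkManager's full activation ladder (disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated); CONNECTED_SITE then upgrades to CONNECTED_GLOBAL once the connectivity check passes. A quick sketch for confirming the same result from a shell:

    nmcli general status                               # "connected" corresponds to CONNECTED_GLOBAL
    nmcli -f DEVICE,STATE,CONNECTION device status
    nmcli -f IP4.ADDRESS,IP4.GATEWAY device show eth0  # dhcp4 lease, 38.102.83.184 above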
Dec 09 15:29:48 np0005552052.novalocal irqbalance[780]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 09 15:29:48 np0005552052.novalocal irqbalance[780]: IRQ 27 affinity is now unmanaged
Dec 09 15:29:58 np0005552052.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 15:30:16 np0005552052.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3581] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 09 15:30:31 np0005552052.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 15:30:31 np0005552052.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3839] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3844] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3851] device (eth1): Activation: successful, device activated.
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3859] manager: startup complete
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3860] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <warn>  [1765294231.3865] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3872] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 09 15:30:31 np0005552052.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3970] dhcp4 (eth1): canceled DHCP transaction
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3971] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3971] dhcp4 (eth1): state changed no lease
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3988] policy: auto-activating connection 'ci-private-network' (ac9b6745-2bc2-5ba9-b147-889934bdce51)
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3991] device (eth1): Activation: starting connection 'ci-private-network' (ac9b6745-2bc2-5ba9-b147-889934bdce51)
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3992] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.3995] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.4004] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.4012] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.4751] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.4756] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:30:31 np0005552052.novalocal NetworkManager[7237]: <info>  [1765294231.4762] device (eth1): Activation: successful, device activated.
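[annotation] eth1's assumed 'Wired connection 1' profile sat in ip-config for the full 45-second DHCP window (15:29:46 to 15:30:31) and failed with ip-config-unavailable, which is also why "Network Manager Wait Online" only finishes here. Policy then auto-activates 'ci-private-network', which goes from prepare to activated with no dhcp4 transaction in between, suggesting a manual/static profile. Its contents are not in the log; a purely hypothetical sketch of creating such a fallback profile (interface name is real, addressing invented):

    # Hypothetical: static profile that wins auto-activation once DHCP fails
    nmcli connection add type ethernet ifname eth1 con-name ci-private-network \
        ipv4.method manual ipv4.addresses 192.0.2.10/24 \
        connection.autoconnect yes connection.autoconnect-priority 10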
Dec 09 15:30:41 np0005552052.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 15:30:46 np0005552052.novalocal systemd[4339]: Starting Mark boot as successful...
Dec 09 15:30:46 np0005552052.novalocal systemd[4339]: Finished Mark boot as successful.
Dec 09 15:30:47 np0005552052.novalocal sshd-session[4350]: Received disconnect from 38.102.83.114 port 48740:11: disconnected by user
Dec 09 15:30:47 np0005552052.novalocal sshd-session[4350]: Disconnected from user zuul 38.102.83.114 port 48740
Dec 09 15:30:47 np0005552052.novalocal sshd-session[4335]: pam_unix(sshd:session): session closed for user zuul
Dec 09 15:30:47 np0005552052.novalocal systemd-logind[786]: Session 1 logged out. Waiting for processes to exit.
Dec 09 15:30:48 np0005552052.novalocal sshd-session[7336]: Accepted publickey for zuul from 38.102.83.114 port 41596 ssh2: RSA SHA256:Hm0y35I6QsPK80/qTWUGGvHfgip63xl7qy6rvlCkCac
Dec 09 15:30:48 np0005552052.novalocal systemd-logind[786]: New session 3 of user zuul.
Dec 09 15:30:48 np0005552052.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 09 15:30:48 np0005552052.novalocal sshd-session[7336]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 15:30:49 np0005552052.novalocal sudo[7415]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckdswczirowmiwjiqrnguyxzrxhcozqe ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 15:30:49 np0005552052.novalocal sudo[7415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:30:49 np0005552052.novalocal python3[7417]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:30:49 np0005552052.novalocal sudo[7415]: pam_unix(sudo:session): session closed for user root
Dec 09 15:30:49 np0005552052.novalocal sudo[7488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prajpgugrulyleamfsgnldltmgdwbgpg ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 15:30:49 np0005552052.novalocal sudo[7488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:30:49 np0005552052.novalocal python3[7490]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765294248.8639238-267-242916082820210/source _original_basename=tmpqgmamgua follow=False checksum=6238896fe532666e5a6f258318d39480ddfdd0f1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:30:49 np0005552052.novalocal sudo[7488]: pam_unix(sudo:session): session closed for user root
Dec 09 15:30:51 np0005552052.novalocal sshd-session[7339]: Connection closed by 38.102.83.114 port 41596
Dec 09 15:30:51 np0005552052.novalocal sshd-session[7336]: pam_unix(sshd:session): session closed for user zuul
Dec 09 15:30:51 np0005552052.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 09 15:30:51 np0005552052.novalocal systemd-logind[786]: Session 3 logged out. Waiting for processes to exit.
Dec 09 15:30:51 np0005552052.novalocal systemd-logind[786]: Removed session 3.
Dec 09 15:32:50 np0005552052.novalocal sshd-session[7516]: Connection closed by 146.190.31.45 port 55098
Dec 09 15:33:32 np0005552052.novalocal sshd-session[7517]: Received disconnect from 45.78.206.111 port 42660:11: Bye Bye [preauth]
Dec 09 15:33:32 np0005552052.novalocal sshd-session[7517]: Disconnected from authenticating user root 45.78.206.111 port 42660 [preauth]
Dec 09 15:33:46 np0005552052.novalocal systemd[4339]: Created slice User Background Tasks Slice.
Dec 09 15:33:46 np0005552052.novalocal systemd[4339]: Starting Cleanup of User's Temporary Files and Directories...
Dec 09 15:33:46 np0005552052.novalocal systemd[4339]: Finished Cleanup of User's Temporary Files and Directories.
Dec 09 15:35:51 np0005552052.novalocal sshd-session[7523]: Connection closed by authenticating user root 146.190.31.45 port 44708 [preauth]
Dec 09 15:36:05 np0005552052.novalocal sshd-session[7525]: Invalid user newusername from 45.78.206.111 port 34362
Dec 09 15:36:05 np0005552052.novalocal sshd-session[7525]: Received disconnect from 45.78.206.111 port 34362:11: Bye Bye [preauth]
Dec 09 15:36:05 np0005552052.novalocal sshd-session[7525]: Disconnected from invalid user newusername 45.78.206.111 port 34362 [preauth]
Dec 09 15:36:33 np0005552052.novalocal sshd-session[7528]: Accepted publickey for zuul from 38.102.83.114 port 45336 ssh2: RSA SHA256:Hm0y35I6QsPK80/qTWUGGvHfgip63xl7qy6rvlCkCac
Dec 09 15:36:33 np0005552052.novalocal systemd-logind[786]: New session 4 of user zuul.
Dec 09 15:36:33 np0005552052.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 09 15:36:33 np0005552052.novalocal sshd-session[7528]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 15:36:33 np0005552052.novalocal sudo[7555]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbgwupquymokrwrseovrmcawbdapslrp ; /usr/bin/python3'
Dec 09 15:36:33 np0005552052.novalocal sudo[7555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:33 np0005552052.novalocal python3[7557]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-9098-5fac-000000001f05-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:36:33 np0005552052.novalocal sudo[7555]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:33 np0005552052.novalocal sudo[7584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wepszspxflmgzdtvwutncuzklexywdpj ; /usr/bin/python3'
Dec 09 15:36:33 np0005552052.novalocal sudo[7584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:33 np0005552052.novalocal python3[7586]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:36:34 np0005552052.novalocal sudo[7584]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:34 np0005552052.novalocal sudo[7610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erugrpokkksamyegmrqgkyhvzaksolay ; /usr/bin/python3'
Dec 09 15:36:34 np0005552052.novalocal sudo[7610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:34 np0005552052.novalocal python3[7612]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:36:34 np0005552052.novalocal sudo[7610]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:34 np0005552052.novalocal sudo[7636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pausxkvcuwtkmdrtqlaeoijdhzrcditx ; /usr/bin/python3'
Dec 09 15:36:34 np0005552052.novalocal sudo[7636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:34 np0005552052.novalocal python3[7638]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:36:34 np0005552052.novalocal sudo[7636]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:34 np0005552052.novalocal sudo[7662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdbsedgfppjroxfuajdvgxqtsjnhwntg ; /usr/bin/python3'
Dec 09 15:36:34 np0005552052.novalocal sudo[7662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:34 np0005552052.novalocal python3[7664]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:36:34 np0005552052.novalocal sudo[7662]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:35 np0005552052.novalocal sudo[7688]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdeiohktyqcqgzsiocukkhhadsjzauxp ; /usr/bin/python3'
Dec 09 15:36:35 np0005552052.novalocal sudo[7688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:35 np0005552052.novalocal python3[7690]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:36:35 np0005552052.novalocal sudo[7688]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:35 np0005552052.novalocal sudo[7766]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zotmaahfjzahbnsozszbjwjbegqcxxoi ; /usr/bin/python3'
Dec 09 15:36:35 np0005552052.novalocal sudo[7766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:35 np0005552052.novalocal python3[7768]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:36:35 np0005552052.novalocal sudo[7766]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:35 np0005552052.novalocal sudo[7839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdkgtpsyvgeltjocswhbkissnvytydsh ; /usr/bin/python3'
Dec 09 15:36:35 np0005552052.novalocal sudo[7839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:36 np0005552052.novalocal python3[7841]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765294595.4880395-480-136729224398197/source _original_basename=tmpvlpauz5h follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:36:36 np0005552052.novalocal sudo[7839]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:36 np0005552052.novalocal sudo[7889]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdnzlfamdrgqepjgylbrybidnvdhogxs ; /usr/bin/python3'
Dec 09 15:36:36 np0005552052.novalocal sudo[7889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:37 np0005552052.novalocal python3[7891]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 15:36:37 np0005552052.novalocal systemd[1]: Reloading.
Dec 09 15:36:37 np0005552052.novalocal systemd-rc-local-generator[7912]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:36:37 np0005552052.novalocal sudo[7889]: pam_unix(sudo:session): session closed for user root
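[annotation] The override.conf body is masked in the log (content=NOT_LOGGING_PARAMETER), but the next task waits for /sys/fs/cgroup/system.slice/io.max to exist, which only happens once systemd enables the cgroup v2 io controller for its slices. A plausible reconstruction under that assumption; the file body is a guess, not the logged content:

    # Hypothetical override matching the observed wait_for on io.max
    cat > /etc/systemd/system.conf.d/override.conf <<'EOF'
    [Manager]
    DefaultIOAccounting=yes
    EOF
    systemctl daemon-reload   # same effect as the systemd_service daemon_reload task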
Dec 09 15:36:38 np0005552052.novalocal sudo[7946]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmcvhnqpqmytcwxtaxrrzkqvcfexhxtt ; /usr/bin/python3'
Dec 09 15:36:38 np0005552052.novalocal sudo[7946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:38 np0005552052.novalocal python3[7948]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 09 15:36:38 np0005552052.novalocal sudo[7946]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:38 np0005552052.novalocal sudo[7972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdwvsbotjtpsrapsurqvmdwazkeonqbd ; /usr/bin/python3'
Dec 09 15:36:38 np0005552052.novalocal sudo[7972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:38 np0005552052.novalocal python3[7974]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:36:38 np0005552052.novalocal sudo[7972]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:39 np0005552052.novalocal sudo[8000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouluackigjyjiddlsdfttviucxpbrqcd ; /usr/bin/python3'
Dec 09 15:36:39 np0005552052.novalocal sudo[8000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:39 np0005552052.novalocal python3[8002]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:36:39 np0005552052.novalocal sudo[8000]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:39 np0005552052.novalocal sudo[8028]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehpdztumxdloxozsxvqkjwrgcmfrnovr ; /usr/bin/python3'
Dec 09 15:36:39 np0005552052.novalocal sudo[8028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:39 np0005552052.novalocal python3[8030]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:36:39 np0005552052.novalocal sudo[8028]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:39 np0005552052.novalocal sudo[8056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rimxjejrbgqcyzqfjaswyaahohcrybua ; /usr/bin/python3'
Dec 09 15:36:39 np0005552052.novalocal sudo[8056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:39 np0005552052.novalocal python3[8058]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:36:39 np0005552052.novalocal sudo[8056]: pam_unix(sudo:session): session closed for user root
Dec 09 15:36:40 np0005552052.novalocal python3[8085]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-9098-5fac-000000001f0c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
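[annotation] The io.max writes above use the cgroup v2 format "<MAJ:MIN> riops= wiops= rbps= wbps=": 252:0 is /dev/vda (obtained by the earlier lsblk -nd -o MAJ:MIN call), the IOPS caps are 18000 reads and writes per second, and 262144000 bytes/s is a 250 MiB/s byte cap in each direction. The same operation, condensed into one loop:

    dev=$(lsblk -nd -o MAJ:MIN /dev/vda | tr -d ' ')   # "252:0" on this host
    for slice in init.scope machine.slice system.slice user.slice; do
        echo "$dev riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > /sys/fs/cgroup/"$slice"/io.max
        cat /sys/fs/cgroup/"$slice"/io.max             # verify, as the play does next
    done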
Dec 09 15:36:40 np0005552052.novalocal python3[8115]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 15:36:42 np0005552052.novalocal sshd-session[8119]: Connection closed by authenticating user root 146.190.31.45 port 53072 [preauth]
Dec 09 15:36:42 np0005552052.novalocal sshd-session[7531]: Connection closed by 38.102.83.114 port 45336
Dec 09 15:36:42 np0005552052.novalocal sshd-session[7528]: pam_unix(sshd:session): session closed for user zuul
Dec 09 15:36:42 np0005552052.novalocal systemd-logind[786]: Session 4 logged out. Waiting for processes to exit.
Dec 09 15:36:42 np0005552052.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 09 15:36:42 np0005552052.novalocal systemd[1]: session-4.scope: Consumed 4.004s CPU time.
Dec 09 15:36:42 np0005552052.novalocal systemd-logind[786]: Removed session 4.
Dec 09 15:36:44 np0005552052.novalocal sshd-session[8125]: Accepted publickey for zuul from 38.102.83.114 port 47796 ssh2: RSA SHA256:Hm0y35I6QsPK80/qTWUGGvHfgip63xl7qy6rvlCkCac
Dec 09 15:36:44 np0005552052.novalocal systemd-logind[786]: New session 5 of user zuul.
Dec 09 15:36:44 np0005552052.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 09 15:36:44 np0005552052.novalocal sshd-session[8125]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 15:36:44 np0005552052.novalocal sudo[8152]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ierzgdsfgtkbsnqgcqfxzalojnnhhiye ; /usr/bin/python3'
Dec 09 15:36:44 np0005552052.novalocal sudo[8152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:36:44 np0005552052.novalocal python3[8154]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 09 15:37:05 np0005552052.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 09 15:37:05 np0005552052.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 15:37:05 np0005552052.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 09 15:37:05 np0005552052.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 15:37:05 np0005552052.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 09 15:37:05 np0005552052.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 15:37:05 np0005552052.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 15:37:05 np0005552052.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 15:37:14 np0005552052.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 09 15:37:14 np0005552052.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 15:37:14 np0005552052.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 09 15:37:14 np0005552052.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 15:37:14 np0005552052.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 09 15:37:14 np0005552052.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 15:37:14 np0005552052.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 15:37:14 np0005552052.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 15:37:26 np0005552052.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 09 15:37:26 np0005552052.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 15:37:26 np0005552052.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 09 15:37:26 np0005552052.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 15:37:26 np0005552052.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 09 15:37:26 np0005552052.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 15:37:26 np0005552052.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 15:37:26 np0005552052.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 15:37:27 np0005552052.novalocal setsebool[8220]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 09 15:37:27 np0005552052.novalocal setsebool[8220]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
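[annotation] Both booleans are flipped persistently during the podman/buildah install; a persistent boolean change rewrites and reloads the policy, which lines up with the kernel's repeated "Converting ... SID table entries" blocks around it. Done by hand, this would be:

    # Run as root; -P makes the change persistent across reboots
    setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1
    getsebool virt_use_nfs virt_sandbox_use_all_caps   # verify both report "on"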
Dec 09 15:37:34 np0005552052.novalocal sshd-session[8229]: Connection closed by authenticating user root 146.190.31.45 port 34864 [preauth]
Dec 09 15:37:40 np0005552052.novalocal kernel: SELinux:  Converting 389 SID table entries...
Dec 09 15:37:40 np0005552052.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 15:37:40 np0005552052.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 09 15:37:40 np0005552052.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 15:37:40 np0005552052.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 09 15:37:40 np0005552052.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 15:37:40 np0005552052.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 15:37:40 np0005552052.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 15:38:00 np0005552052.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 09 15:38:00 np0005552052.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 15:38:00 np0005552052.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 09 15:38:00 np0005552052.novalocal systemd[1]: Reloading.
Dec 09 15:38:00 np0005552052.novalocal systemd-rc-local-generator[8971]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:38:01 np0005552052.novalocal systemd[1]: Starting dnf makecache...
Dec 09 15:38:01 np0005552052.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 15:38:01 np0005552052.novalocal dnf[9112]: Failed determining last makecache time.
Dec 09 15:38:01 np0005552052.novalocal dnf[9112]: CentOS Stream 9 - BaseOS                         61 kB/s | 6.4 kB     00:00
Dec 09 15:38:02 np0005552052.novalocal dnf[9112]: CentOS Stream 9 - AppStream                      29 kB/s | 7.1 kB     00:00
Dec 09 15:38:02 np0005552052.novalocal dnf[9112]: CentOS Stream 9 - CRB                            64 kB/s | 6.3 kB     00:00
Dec 09 15:38:02 np0005552052.novalocal sudo[8152]: pam_unix(sudo:session): session closed for user root
Dec 09 15:38:02 np0005552052.novalocal dnf[9112]: CentOS Stream 9 - Extras packages                69 kB/s | 8.3 kB     00:00
Dec 09 15:38:02 np0005552052.novalocal dnf[9112]: Metadata cache created.
Dec 09 15:38:02 np0005552052.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 09 15:38:02 np0005552052.novalocal systemd[1]: Finished dnf makecache.
Dec 09 15:38:03 np0005552052.novalocal python3[11125]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-7c2c-1a6a-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:38:03 np0005552052.novalocal kernel: evm: overlay not supported
Dec 09 15:38:04 np0005552052.novalocal systemd[4339]: Starting D-Bus User Message Bus...
Dec 09 15:38:04 np0005552052.novalocal dbus-broker-launch[12153]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 09 15:38:04 np0005552052.novalocal dbus-broker-launch[12153]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 09 15:38:04 np0005552052.novalocal systemd[4339]: Started D-Bus User Message Bus.
Dec 09 15:38:04 np0005552052.novalocal dbus-broker-lau[12153]: Ready
Dec 09 15:38:04 np0005552052.novalocal systemd[4339]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 09 15:38:04 np0005552052.novalocal systemd[4339]: Created slice Slice /user.
Dec 09 15:38:04 np0005552052.novalocal systemd[4339]: podman-12077.scope: unit configures an IP firewall, but not running as root.
Dec 09 15:38:04 np0005552052.novalocal systemd[4339]: (This warning is only shown for the first unit using IP firewalling.)
Dec 09 15:38:04 np0005552052.novalocal systemd[4339]: Started podman-12077.scope.
Dec 09 15:38:04 np0005552052.novalocal systemd[4339]: Started podman-pause-305cba0a.scope.
Dec 09 15:38:05 np0005552052.novalocal sudo[13056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szccmsnueujfrzqfeuxmnzwwpinjinkb ; /usr/bin/python3'
Dec 09 15:38:05 np0005552052.novalocal sudo[13056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:38:05 np0005552052.novalocal python3[13086]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.234:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.234:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:38:05 np0005552052.novalocal python3[13086]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 09 15:38:05 np0005552052.novalocal sudo[13056]: pam_unix(sudo:session): session closed for user root
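[annotation] Reconstructed from the logged blockinfile parameters (content, marker "# {mark} ANSIBLE MANAGED BLOCK", marker_begin=BEGIN, marker_end=END), the section appended to /etc/containers/registries.conf should read as below; the shell form is only for illustration:

    cat >> /etc/containers/registries.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.234:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
    EOF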
Dec 09 15:38:05 np0005552052.novalocal sshd-session[8128]: Connection closed by 38.102.83.114 port 47796
Dec 09 15:38:05 np0005552052.novalocal sshd-session[8125]: pam_unix(sshd:session): session closed for user zuul
Dec 09 15:38:05 np0005552052.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 09 15:38:05 np0005552052.novalocal systemd[1]: session-5.scope: Consumed 1min 9.777s CPU time.
Dec 09 15:38:05 np0005552052.novalocal systemd-logind[786]: Session 5 logged out. Waiting for processes to exit.
Dec 09 15:38:05 np0005552052.novalocal systemd-logind[786]: Removed session 5.
Dec 09 15:38:23 np0005552052.novalocal sshd-session[21117]: Connection closed by 38.102.83.236 port 49214 [preauth]
Dec 09 15:38:23 np0005552052.novalocal sshd-session[21123]: Connection closed by 38.102.83.236 port 49230 [preauth]
Dec 09 15:38:23 np0005552052.novalocal sshd-session[21119]: Unable to negotiate with 38.102.83.236 port 49236: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 09 15:38:23 np0005552052.novalocal sshd-session[21121]: Unable to negotiate with 38.102.83.236 port 49238: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 09 15:38:23 np0005552052.novalocal sshd-session[21124]: Unable to negotiate with 38.102.83.236 port 49250: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 09 15:38:24 np0005552052.novalocal sshd-session[20735]: Invalid user admin from 78.128.112.74 port 45320
Dec 09 15:38:24 np0005552052.novalocal sshd-session[20735]: Connection closed by invalid user admin 78.128.112.74 port 45320 [preauth]
Dec 09 15:38:24 np0005552052.novalocal sshd-session[21295]: Connection closed by authenticating user root 146.190.31.45 port 36996 [preauth]
Dec 09 15:38:28 np0005552052.novalocal sshd-session[22695]: Accepted publickey for zuul from 38.102.83.114 port 56138 ssh2: RSA SHA256:Hm0y35I6QsPK80/qTWUGGvHfgip63xl7qy6rvlCkCac
Dec 09 15:38:28 np0005552052.novalocal systemd-logind[786]: New session 6 of user zuul.
Dec 09 15:38:28 np0005552052.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 09 15:38:28 np0005552052.novalocal sshd-session[22695]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 15:38:28 np0005552052.novalocal python3[22794]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUz3udIgzrDC8b/RFp/5o6W7tEirnT7wz0cn6AfNNFq3towKODMoUjUBW7ka8FKan1XRHJqAfGZXD7kD/T6Hnw= zuul@np0005552051.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:38:29 np0005552052.novalocal sudo[22960]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgndyixiphvcdxhobkkjhbxpouokjnsa ; /usr/bin/python3'
Dec 09 15:38:29 np0005552052.novalocal sudo[22960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:38:29 np0005552052.novalocal python3[22969]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUz3udIgzrDC8b/RFp/5o6W7tEirnT7wz0cn6AfNNFq3towKODMoUjUBW7ka8FKan1XRHJqAfGZXD7kD/T6Hnw= zuul@np0005552051.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:38:29 np0005552052.novalocal sudo[22960]: pam_unix(sudo:session): session closed for user root
Dec 09 15:38:29 np0005552052.novalocal sudo[23259]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etjigimbufrtojeirqsexgugagrfobcx ; /usr/bin/python3'
Dec 09 15:38:29 np0005552052.novalocal sudo[23259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:38:30 np0005552052.novalocal python3[23268]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005552052.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 09 15:38:30 np0005552052.novalocal useradd[23337]: new group: name=cloud-admin, GID=1002
Dec 09 15:38:30 np0005552052.novalocal useradd[23337]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 09 15:38:30 np0005552052.novalocal sudo[23259]: pam_unix(sudo:session): session closed for user root
Dec 09 15:38:30 np0005552052.novalocal sudo[23477]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epgnurcwdigfjwznhwmmfgnxadgjkqtj ; /usr/bin/python3'
Dec 09 15:38:30 np0005552052.novalocal sudo[23477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:38:30 np0005552052.novalocal python3[23487]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUz3udIgzrDC8b/RFp/5o6W7tEirnT7wz0cn6AfNNFq3towKODMoUjUBW7ka8FKan1XRHJqAfGZXD7kD/T6Hnw= zuul@np0005552051.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 15:38:30 np0005552052.novalocal sudo[23477]: pam_unix(sudo:session): session closed for user root
Dec 09 15:38:30 np0005552052.novalocal sudo[23763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spmepleyzmhfsjescldivczhsgqutqks ; /usr/bin/python3'
Dec 09 15:38:30 np0005552052.novalocal sudo[23763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:38:31 np0005552052.novalocal python3[23773]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:38:31 np0005552052.novalocal sudo[23763]: pam_unix(sudo:session): session closed for user root
Dec 09 15:38:31 np0005552052.novalocal sudo[24051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecowedststicjgauwbcvcrrjaaxkzqnq ; /usr/bin/python3'
Dec 09 15:38:31 np0005552052.novalocal sudo[24051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:38:31 np0005552052.novalocal python3[24059]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765294710.7422647-135-203620807452659/source _original_basename=tmp523mw4d7 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:38:31 np0005552052.novalocal sudo[24051]: pam_unix(sudo:session): session closed for user root
Dec 09 15:38:32 np0005552052.novalocal sudo[24348]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqgvxhlgcikajcyupyrqcxbqtbranxed ; /usr/bin/python3'
Dec 09 15:38:32 np0005552052.novalocal sudo[24348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:38:32 np0005552052.novalocal python3[24357]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 09 15:38:32 np0005552052.novalocal systemd[1]: Starting Hostname Service...
Dec 09 15:38:32 np0005552052.novalocal systemd[1]: Started Hostname Service.
Dec 09 15:38:32 np0005552052.novalocal systemd-hostnamed[24453]: Changed pretty hostname to 'compute-0'
Dec 09 15:38:32 compute-0 systemd-hostnamed[24453]: Hostname set to <compute-0> (static)
Dec 09 15:38:32 compute-0 NetworkManager[7237]: <info>  [1765294712.6489] hostname: static hostname changed from "np0005552052.novalocal" to "compute-0"
Dec 09 15:38:32 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 15:38:32 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 15:38:32 compute-0 sudo[24348]: pam_unix(sudo:session): session closed for user root
Dec 09 15:38:32 compute-0 sshd-session[22736]: Connection closed by 38.102.83.114 port 56138
Dec 09 15:38:32 compute-0 sshd-session[22695]: pam_unix(sshd:session): session closed for user zuul
Dec 09 15:38:32 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 09 15:38:32 compute-0 systemd[1]: session-6.scope: Consumed 2.345s CPU time.
Dec 09 15:38:32 compute-0 systemd-logind[786]: Session 6 logged out. Waiting for processes to exit.
Dec 09 15:38:32 compute-0 systemd-logind[786]: Removed session 6.
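[annotation] The ansible.builtin.hostname task with use=systemd is a D-Bus call to systemd-hostnamed, which is why "Hostname Service" starts on demand, the pretty and static names both change, and the journal's host field flips from np0005552052.novalocal to compute-0 mid-stream above. The manual equivalent:

    hostnamectl set-hostname compute-0   # sets static (and pretty/transient) names via hostnamed
    hostnamectl status                   # confirm "Static hostname: compute-0"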
Dec 09 15:38:42 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 15:38:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 15:38:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 15:38:49 compute-0 systemd[1]: man-db-cache-update.service: Consumed 57.475s CPU time.
Dec 09 15:38:49 compute-0 systemd[1]: run-r5dcb5bd38c6f4dbb8e02b4fd30b49955.service: Deactivated successfully.
Dec 09 15:39:02 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 09 15:39:12 compute-0 sshd-session[30028]: Connection closed by authenticating user root 146.190.31.45 port 43204 [preauth]
Dec 09 15:39:58 compute-0 sshd-session[30032]: Connection closed by authenticating user root 146.190.31.45 port 60514 [preauth]
Dec 09 15:40:01 compute-0 anacron[4310]: Job `cron.daily' started
Dec 09 15:40:01 compute-0 anacron[4310]: Job `cron.daily' terminated
Dec 09 15:40:42 compute-0 sshd-session[30037]: Connection closed by authenticating user root 146.190.31.45 port 34366 [preauth]
Dec 09 15:40:48 compute-0 sshd[1005]: Timeout before authentication for connection from 45.78.206.111 to 38.102.83.184, pid = 28542
Dec 09 15:41:13 compute-0 sshd[1005]: drop connection #0 from [45.78.206.111]:45592 on [38.102.83.184]:22 penalty: exceeded LoginGraceTime
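[annotation] The sshd[1005] "penalty: exceeded LoginGraceTime" drop comes from OpenSSH's per-source penalty mechanism (enabled by default since OpenSSH 9.8): the earlier connection from 45.78.206.111 timed out before authenticating, so follow-up connections from that source are refused for a while. A hypothetical tuning sketch, assuming an sshd new enough to support these keywords; the values and the exempt network are invented:

    # Hypothetical /etc/ssh/sshd_config.d/50-penalties.conf
    cat > /etc/ssh/sshd_config.d/50-penalties.conf <<'EOF'
    PerSourcePenalties authfail:10s grace-exceeded:30s
    PerSourcePenaltyExemptList 38.102.83.0/24
    EOF
    sshd -t && systemctl reload sshd    # validate the config, then apply it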
Dec 09 15:41:28 compute-0 sshd-session[30041]: Connection closed by authenticating user root 146.190.31.45 port 38094 [preauth]
Dec 09 15:42:14 compute-0 sshd-session[30043]: Connection closed by authenticating user root 146.190.31.45 port 39004 [preauth]
Dec 09 15:42:23 compute-0 sshd-session[30045]: Accepted publickey for zuul from 38.102.83.236 port 57618 ssh2: RSA SHA256:Hm0y35I6QsPK80/qTWUGGvHfgip63xl7qy6rvlCkCac
Dec 09 15:42:23 compute-0 systemd-logind[786]: New session 7 of user zuul.
Dec 09 15:42:23 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 09 15:42:23 compute-0 sshd-session[30045]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 15:42:24 compute-0 python3[30121]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:42:25 compute-0 sudo[30235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poobdeczetkoeagclvgtxdhllsjyfrrw ; /usr/bin/python3'
Dec 09 15:42:25 compute-0 sudo[30235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:26 compute-0 python3[30237]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:42:26 compute-0 sudo[30235]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:26 compute-0 sudo[30308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btnnywcfrvdljjkdjbxkxbihsdxfexlw ; /usr/bin/python3'
Dec 09 15:42:26 compute-0 sudo[30308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:26 compute-0 python3[30310]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765294945.7363133-33663-160139091118366/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:42:26 compute-0 sudo[30308]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:26 compute-0 sudo[30334]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoycoscsvgygrbtpzjandbynrylzzxdc ; /usr/bin/python3'
Dec 09 15:42:26 compute-0 sudo[30334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:26 compute-0 python3[30336]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:42:26 compute-0 sudo[30334]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:26 compute-0 sudo[30407]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glzjbdfvhuhccyyynxfuunrhhqfgfpqc ; /usr/bin/python3'
Dec 09 15:42:27 compute-0 sudo[30407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:27 compute-0 python3[30409]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765294945.7363133-33663-160139091118366/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:42:27 compute-0 sudo[30407]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:27 compute-0 sudo[30433]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aozfjfscrgumbnrfdxjgzqkpxmxlnala ; /usr/bin/python3'
Dec 09 15:42:27 compute-0 sudo[30433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:27 compute-0 python3[30435]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:42:27 compute-0 sudo[30433]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:27 compute-0 sudo[30506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtgyhppblrhuhmdskcsilwmfcutdfxym ; /usr/bin/python3'
Dec 09 15:42:27 compute-0 sudo[30506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:27 compute-0 python3[30508]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765294945.7363133-33663-160139091118366/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:42:27 compute-0 sudo[30506]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:27 compute-0 sudo[30532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gevprlnqpwynkmazynzravdorxomxtmd ; /usr/bin/python3'
Dec 09 15:42:27 compute-0 sudo[30532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:28 compute-0 python3[30534]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:42:28 compute-0 sudo[30532]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:28 compute-0 sudo[30605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzdbquthyyeqwzttsolauskldtglexuu ; /usr/bin/python3'
Dec 09 15:42:28 compute-0 sudo[30605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:28 compute-0 python3[30607]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765294945.7363133-33663-160139091118366/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:42:28 compute-0 sudo[30605]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:28 compute-0 sudo[30631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fafyllfungzrhoqhcmjsryfnjkdwaddi ; /usr/bin/python3'
Dec 09 15:42:28 compute-0 sudo[30631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:28 compute-0 python3[30633]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:42:28 compute-0 sudo[30631]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:28 compute-0 sudo[30704]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjxhjgxvnqajzykcyqlkonzbyymcoauj ; /usr/bin/python3'
Dec 09 15:42:28 compute-0 sudo[30704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:29 compute-0 python3[30706]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765294945.7363133-33663-160139091118366/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:42:29 compute-0 sudo[30704]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:29 compute-0 sudo[30730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aceimczjyinrcppifqhtvnzmeukktuyz ; /usr/bin/python3'
Dec 09 15:42:29 compute-0 sudo[30730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:29 compute-0 python3[30732]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:42:29 compute-0 sudo[30730]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:29 compute-0 sudo[30803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgilxdmsgwphufdmgbjigyiodmclmjuq ; /usr/bin/python3'
Dec 09 15:42:29 compute-0 sudo[30803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:29 compute-0 python3[30805]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765294945.7363133-33663-160139091118366/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:42:29 compute-0 sudo[30803]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:29 compute-0 sudo[30829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cytjbirvzycnqwanqxxnazhnrbmeocjx ; /usr/bin/python3'
Dec 09 15:42:29 compute-0 sudo[30829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:29 compute-0 python3[30831]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 15:42:29 compute-0 sudo[30829]: pam_unix(sudo:session): session closed for user root
Dec 09 15:42:30 compute-0 sudo[30902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muxvihyedvuaesuiedavxgwmljtwfusi ; /usr/bin/python3'
Dec 09 15:42:30 compute-0 sudo[30902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:42:30 compute-0 python3[30904]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765294945.7363133-33663-160139091118366/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:42:30 compute-0 sudo[30902]: pam_unix(sudo:session): session closed for user root
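The six staged files above land in /etc/yum.repos.d/ one by one (delorean-antelope-testing.repo, the repo-setup-centos-* repos, and delorean.repo.md5), each as a stat-then-copy pair under sudo. A rough shell equivalent for one of them, using the staged source path exactly as logged, would be:

    install -m 0755 \
        /home/zuul/.ansible/tmp/ansible-tmp-1765294945.7363133-33663-160139091118366/source \
        /etc/yum.repos.d/delorean-antelope-testing.repo

(mode=0755 is what the tasks request, verbatim, even though repo files do not need the execute bit).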
Dec 09 15:42:32 compute-0 sshd-session[30929]: Unable to negotiate with 192.168.122.11 port 56672: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 09 15:42:32 compute-0 sshd-session[30930]: Connection closed by 192.168.122.11 port 56650 [preauth]
Dec 09 15:42:32 compute-0 sshd-session[30932]: Connection closed by 192.168.122.11 port 56652 [preauth]
Dec 09 15:42:32 compute-0 sshd-session[30931]: Unable to negotiate with 192.168.122.11 port 56654: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 09 15:42:32 compute-0 sshd-session[30933]: Unable to negotiate with 192.168.122.11 port 56670: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 09 15:42:46 compute-0 python3[30962]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:43:02 compute-0 sshd-session[30964]: Connection closed by authenticating user root 146.190.31.45 port 51468 [preauth]
Dec 09 15:43:50 compute-0 sshd-session[30967]: Connection closed by authenticating user root 146.190.31.45 port 52398 [preauth]
Dec 09 15:44:13 compute-0 sshd-session[30969]: Connection closed by 171.253.168.86 port 39450
Dec 09 15:44:37 compute-0 sshd-session[30970]: Connection closed by authenticating user root 146.190.31.45 port 49014 [preauth]
Dec 09 15:45:23 compute-0 sshd-session[30972]: Connection closed by authenticating user root 146.190.31.45 port 33626 [preauth]
Dec 09 15:46:07 compute-0 sshd-session[30974]: Connection closed by authenticating user root 146.190.31.45 port 60962 [preauth]
Dec 09 15:46:27 compute-0 sshd-session[30977]: Invalid user ubuntu from 45.78.206.111 port 46500
Dec 09 15:46:28 compute-0 sshd-session[30977]: Received disconnect from 45.78.206.111 port 46500:11: Bye Bye [preauth]
Dec 09 15:46:28 compute-0 sshd-session[30977]: Disconnected from invalid user ubuntu 45.78.206.111 port 46500 [preauth]
Dec 09 15:46:51 compute-0 sshd-session[30980]: Connection closed by authenticating user root 146.190.31.45 port 58502 [preauth]
Dec 09 15:47:35 compute-0 sshd-session[30982]: Connection closed by authenticating user root 146.190.31.45 port 37634 [preauth]
Dec 09 15:47:45 compute-0 sshd-session[30048]: Received disconnect from 38.102.83.236 port 57618:11: disconnected by user
Dec 09 15:47:45 compute-0 sshd-session[30048]: Disconnected from user zuul 38.102.83.236 port 57618
Dec 09 15:47:45 compute-0 sshd-session[30045]: pam_unix(sshd:session): session closed for user zuul
Dec 09 15:47:45 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 09 15:47:45 compute-0 systemd[1]: session-7.scope: Consumed 5.206s CPU time.
Dec 09 15:47:45 compute-0 systemd-logind[786]: Session 7 logged out. Waiting for processes to exit.
Dec 09 15:47:45 compute-0 systemd-logind[786]: Removed session 7.
Dec 09 15:48:19 compute-0 sshd-session[30984]: Connection closed by authenticating user root 146.190.31.45 port 60156 [preauth]
Dec 09 15:49:06 compute-0 sshd-session[30987]: Connection closed by authenticating user root 146.190.31.45 port 36442 [preauth]
Dec 09 15:49:08 compute-0 sshd-session[30986]: Connection closed by 45.78.206.111 port 59916 [preauth]
Dec 09 15:49:54 compute-0 sshd-session[30990]: Connection closed by authenticating user root 146.190.31.45 port 52822 [preauth]
Dec 09 15:50:42 compute-0 sshd-session[30993]: Connection closed by authenticating user root 146.190.31.45 port 46552 [preauth]
Dec 09 15:51:28 compute-0 sshd-session[30996]: Invalid user admin from 146.190.31.45 port 35644
Dec 09 15:51:28 compute-0 sshd-session[30996]: Connection closed by invalid user admin 146.190.31.45 port 35644 [preauth]
Dec 09 15:51:40 compute-0 sshd-session[30998]: Invalid user adminuser from 45.78.206.111 port 48180
Dec 09 15:51:40 compute-0 sshd-session[30998]: Received disconnect from 45.78.206.111 port 48180:11: Bye Bye [preauth]
Dec 09 15:51:40 compute-0 sshd-session[30998]: Disconnected from invalid user adminuser 45.78.206.111 port 48180 [preauth]
Dec 09 15:52:13 compute-0 sshd-session[31001]: Invalid user admin from 146.190.31.45 port 57400
Dec 09 15:52:13 compute-0 sshd-session[31001]: Connection closed by invalid user admin 146.190.31.45 port 57400 [preauth]
Dec 09 15:52:58 compute-0 sshd-session[31004]: Invalid user admin from 146.190.31.45 port 38672
Dec 09 15:52:58 compute-0 sshd-session[31004]: Connection closed by invalid user admin 146.190.31.45 port 38672 [preauth]
Dec 09 15:53:42 compute-0 sshd-session[31007]: Invalid user admin from 146.190.31.45 port 41122
Dec 09 15:53:42 compute-0 sshd-session[31007]: Connection closed by invalid user admin 146.190.31.45 port 41122 [preauth]
Dec 09 15:54:06 compute-0 sshd-session[31009]: Received disconnect from 45.78.206.111 port 49128:11: Bye Bye [preauth]
Dec 09 15:54:06 compute-0 sshd-session[31009]: Disconnected from authenticating user root 45.78.206.111 port 49128 [preauth]
Dec 09 15:54:23 compute-0 sshd-session[31011]: Accepted publickey for zuul from 192.168.122.30 port 46660 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 15:54:23 compute-0 systemd-logind[786]: New session 8 of user zuul.
Dec 09 15:54:23 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 09 15:54:23 compute-0 sshd-session[31011]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 15:54:24 compute-0 python3.9[31164]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:54:25 compute-0 sudo[31343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqcnfoekbhkrsucvfylqvoynfvznttbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295664.953932-32-145899284704896/AnsiballZ_command.py'
Dec 09 15:54:25 compute-0 sudo[31343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:54:25 compute-0 python3.9[31345]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
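The logged _raw_params script is self-contained: it fetches the main branch of openstack-k8s-operators/repo-setup, installs it into a throwaway venv (PBR_VERSION=0.0.0 sidesteps pbr's git-based version detection), runs repo-setup current-podified -b antelope to write the repo files, then deletes the checkout. It can be replayed verbatim from /var/tmp.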
Dec 09 15:54:26 compute-0 sshd-session[31356]: Invalid user admin from 146.190.31.45 port 51402
Dec 09 15:54:26 compute-0 sshd-session[31356]: Connection closed by invalid user admin 146.190.31.45 port 51402 [preauth]
Dec 09 15:54:33 compute-0 sudo[31343]: pam_unix(sudo:session): session closed for user root
Dec 09 15:54:33 compute-0 sshd-session[31014]: Connection closed by 192.168.122.30 port 46660
Dec 09 15:54:33 compute-0 sshd-session[31011]: pam_unix(sshd:session): session closed for user zuul
Dec 09 15:54:33 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 09 15:54:33 compute-0 systemd[1]: session-8.scope: Consumed 8.083s CPU time.
Dec 09 15:54:33 compute-0 systemd-logind[786]: Session 8 logged out. Waiting for processes to exit.
Dec 09 15:54:33 compute-0 systemd-logind[786]: Removed session 8.
Dec 09 15:54:49 compute-0 sshd-session[31404]: Accepted publickey for zuul from 192.168.122.30 port 39782 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 15:54:49 compute-0 systemd-logind[786]: New session 9 of user zuul.
Dec 09 15:54:49 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 09 15:54:49 compute-0 sshd-session[31404]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 15:54:50 compute-0 python3.9[31557]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 09 15:54:51 compute-0 python3.9[31731]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:54:52 compute-0 sudo[31881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggdjnneancmkbtdwcocwgfbklnxqgqbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295691.7694244-45-169858030398324/AnsiballZ_command.py'
Dec 09 15:54:52 compute-0 sudo[31881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:54:52 compute-0 python3.9[31883]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:54:52 compute-0 sudo[31881]: pam_unix(sudo:session): session closed for user root
Dec 09 15:54:53 compute-0 sudo[32034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emtmvpwlqinfbsebphqbnrprufdtsbcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295693.1608217-57-140326432657112/AnsiballZ_stat.py'
Dec 09 15:54:53 compute-0 sudo[32034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:54:53 compute-0 python3.9[32036]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 15:54:53 compute-0 sudo[32034]: pam_unix(sudo:session): session closed for user root
Dec 09 15:54:54 compute-0 sudo[32186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amuodulccvozfddylvckjjashdlgxkog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295694.099585-65-202711356853506/AnsiballZ_file.py'
Dec 09 15:54:54 compute-0 sudo[32186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:54:54 compute-0 python3.9[32188]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:54:54 compute-0 sudo[32186]: pam_unix(sudo:session): session closed for user root
Dec 09 15:54:55 compute-0 sudo[32338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndwubqszuqvkjuitmpokweijjteuswut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295694.9344-73-236900553681000/AnsiballZ_stat.py'
Dec 09 15:54:55 compute-0 sudo[32338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:54:55 compute-0 python3.9[32340]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 15:54:55 compute-0 sudo[32338]: pam_unix(sudo:session): session closed for user root
Dec 09 15:54:55 compute-0 sudo[32461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkhxhvhfezwxonsinyvjeoiyxzvrquun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295694.9344-73-236900553681000/AnsiballZ_copy.py'
Dec 09 15:54:55 compute-0 sudo[32461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:54:56 compute-0 python3.9[32463]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765295694.9344-73-236900553681000/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:54:56 compute-0 sudo[32461]: pam_unix(sudo:session): session closed for user root
Dec 09 15:54:56 compute-0 sudo[32613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kundygqnwhwmyjbabofephhaangznumv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295696.3310776-88-132851343880529/AnsiballZ_setup.py'
Dec 09 15:54:56 compute-0 sudo[32613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:54:56 compute-0 python3.9[32615]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:54:57 compute-0 sudo[32613]: pam_unix(sudo:session): session closed for user root
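bootc.fact is installed executable (mode 755) under /etc/ansible/facts.d, so the follow-up setup call with gather_subset=['!all', '!min', 'local'] executes it and publishes its JSON output under ansible_local. The real contents are not logged (only a checksum); a hypothetical stand-in for such a fact script might look like:

    #!/bin/sh
    # hypothetical /etc/ansible/facts.d/bootc.fact:
    # report whether this host is bootc-managed, as JSON for local fact gathering
    if command -v bootc >/dev/null 2>&1; then
        echo '{"bootc": true}'
    else
        echo '{"bootc": false}'
    fi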
Dec 09 15:54:57 compute-0 sudo[32769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpxsqpvdpodyfhjjihvpadxhhmfovrcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295697.3579414-96-189075984282281/AnsiballZ_file.py'
Dec 09 15:54:57 compute-0 sudo[32769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:54:57 compute-0 python3.9[32771]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 15:54:57 compute-0 sudo[32769]: pam_unix(sudo:session): session closed for user root
Dec 09 15:54:58 compute-0 sudo[32921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bymkmupofyhnkalcdweljuuoahnkxykb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295698.0719697-105-83191924769930/AnsiballZ_file.py'
Dec 09 15:54:58 compute-0 sudo[32921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:54:58 compute-0 python3.9[32923]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 15:54:58 compute-0 sudo[32921]: pam_unix(sudo:session): session closed for user root
Dec 09 15:54:59 compute-0 python3.9[33073]: ansible-ansible.builtin.service_facts Invoked
Dec 09 15:55:04 compute-0 python3.9[33326]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
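lineinfile with create=False against the read-only /proc/cmdline cannot actually edit anything; in playbooks this pattern is normally an assertion (often run in check mode) that the kernel command line already carries cloud-init=disabled. Assuming that intent, the shell equivalent of the check is:

    grep -q 'cloud-init=disabled' /proc/cmdline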
Dec 09 15:55:05 compute-0 python3.9[33476]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:55:06 compute-0 python3.9[33630]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:55:07 compute-0 sudo[33786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alpileppyenywiqukqlvnoalvlqsmqpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295707.076804-153-113108562831382/AnsiballZ_setup.py'
Dec 09 15:55:07 compute-0 sudo[33786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:55:07 compute-0 python3.9[33788]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 15:55:08 compute-0 sudo[33786]: pam_unix(sudo:session): session closed for user root
Dec 09 15:55:08 compute-0 sudo[33870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvczdwanwihtiwsvjijhpziiwfnlykgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295707.076804-153-113108562831382/AnsiballZ_dnf.py'
Dec 09 15:55:08 compute-0 sudo[33870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:55:08 compute-0 python3.9[33872]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
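The package set requested by the dnf task maps to a single manual transaction; with the repos installed earlier enabled, the equivalent is:

    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container \
        crypto-policies-scripts grubby sos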
Dec 09 15:55:12 compute-0 sshd-session[33936]: Invalid user admin from 146.190.31.45 port 56230
Dec 09 15:55:12 compute-0 sshd-session[33936]: Connection closed by invalid user admin 146.190.31.45 port 56230 [preauth]
Dec 09 15:55:53 compute-0 systemd[1]: Reloading.
Dec 09 15:55:53 compute-0 systemd-rc-local-generator[34067]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:55:53 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 09 15:55:53 compute-0 systemd[1]: Reloading.
Dec 09 15:55:53 compute-0 systemd-rc-local-generator[34110]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:55:53 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 09 15:55:53 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 09 15:55:53 compute-0 systemd[1]: Reloading.
Dec 09 15:55:53 compute-0 systemd-rc-local-generator[34152]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:55:53 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 09 15:55:54 compute-0 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 09 15:55:54 compute-0 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 09 15:55:54 compute-0 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 09 15:55:58 compute-0 sshd-session[34183]: Invalid user admin from 146.190.31.45 port 45598
Dec 09 15:55:58 compute-0 sshd-session[34183]: Connection closed by invalid user admin 146.190.31.45 port 45598 [preauth]
Dec 09 15:56:45 compute-0 sshd-session[34326]: Invalid user admin from 146.190.31.45 port 49536
Dec 09 15:56:45 compute-0 sshd-session[34326]: Connection closed by invalid user admin 146.190.31.45 port 49536 [preauth]
Dec 09 15:56:59 compute-0 kernel: SELinux:  Converting 2720 SID table entries...
Dec 09 15:56:59 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 15:56:59 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 15:56:59 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 15:56:59 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 15:56:59 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 15:56:59 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 15:56:59 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 15:57:00 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 09 15:57:00 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 15:57:00 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 15:57:00 compute-0 systemd[1]: Reloading.
Dec 09 15:57:00 compute-0 systemd-rc-local-generator[34466]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:57:00 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 15:57:00 compute-0 sudo[33870]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:01 compute-0 sudo[35385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaachomfooycjfkcrtakgjrwcdvldcdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295821.1031823-165-79497146640196/AnsiballZ_command.py'
Dec 09 15:57:01 compute-0 sudo[35385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:01 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 15:57:01 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 15:57:01 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.318s CPU time.
Dec 09 15:57:01 compute-0 systemd[1]: run-r6f4fa43171564f79ab08390b3b001654.service: Deactivated successfully.
Dec 09 15:57:01 compute-0 python3.9[35387]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:57:02 compute-0 sudo[35385]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:03 compute-0 sudo[35667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdlbfwzhltjptctgqwfsyzwblimjjetz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295822.6100292-173-5056775093929/AnsiballZ_selinux.py'
Dec 09 15:57:03 compute-0 sudo[35667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:03 compute-0 python3.9[35669]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 09 15:57:03 compute-0 sudo[35667]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:04 compute-0 sudo[35819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tumvwajwpzgusshzeajautqnxagpnrbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295823.8988886-184-15564494849460/AnsiballZ_command.py'
Dec 09 15:57:04 compute-0 sudo[35819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:04 compute-0 python3.9[35821]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 09 15:57:05 compute-0 sudo[35819]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:05 compute-0 sudo[35972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sppdwqlxhdhxvahisxcjxqhnbxvrmnru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295825.58162-192-250728943059550/AnsiballZ_file.py'
Dec 09 15:57:05 compute-0 sudo[35972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:06 compute-0 python3.9[35974]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:57:06 compute-0 sudo[35972]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:07 compute-0 sudo[36124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsupceiwtzgwijhaopornislbrlxbetl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295826.7840176-200-244629215341479/AnsiballZ_mount.py'
Dec 09 15:57:07 compute-0 sudo[36124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:07 compute-0 python3.9[36126]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 09 15:57:07 compute-0 sudo[36124]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:08 compute-0 sudo[36276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoszpemsaydttnkexhhmierzxcnmhpsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295828.4250677-228-260163232707414/AnsiballZ_file.py'
Dec 09 15:57:08 compute-0 sudo[36276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:08 compute-0 python3.9[36278]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 15:57:08 compute-0 sudo[36276]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:09 compute-0 sudo[36428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyzxlslewkqbtljriqyxpetgdeqrjxyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295829.2184987-236-151662492328127/AnsiballZ_stat.py'
Dec 09 15:57:09 compute-0 sudo[36428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:09 compute-0 python3.9[36430]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 15:57:09 compute-0 sudo[36428]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:10 compute-0 sudo[36551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcilpdhebkwabrnvgosryvtvajwwsvak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295829.2184987-236-151662492328127/AnsiballZ_copy.py'
Dec 09 15:57:10 compute-0 sudo[36551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:10 compute-0 python3.9[36554]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765295829.2184987-236-151662492328127/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=00e758ddc55a34ae8ccf237d115f45aaa7d998db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:57:10 compute-0 sudo[36551]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:11 compute-0 sudo[36704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrtbmmhmjcgpqoufkmnevyanooqdniwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295830.8904214-260-75860402982998/AnsiballZ_stat.py'
Dec 09 15:57:11 compute-0 sudo[36704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:13 compute-0 python3.9[36706]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 15:57:13 compute-0 sudo[36704]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:14 compute-0 sudo[36856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcqqwvqejrexmrfkwqhiefdgydirnbfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295833.7721994-268-251077740959603/AnsiballZ_command.py'
Dec 09 15:57:14 compute-0 sudo[36856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:15 compute-0 python3.9[36858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:57:15 compute-0 sudo[36856]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:15 compute-0 sudo[37009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnyogizfguqcaoluvcwqvdvkdfrugigw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295835.6468446-276-177841172321463/AnsiballZ_file.py'
Dec 09 15:57:15 compute-0 sudo[37009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:16 compute-0 python3.9[37011]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:57:16 compute-0 sudo[37009]: pam_unix(sudo:session): session closed for user root
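Taken together, the stat, vgimportdevices and touch tasks above seed LVM's devices file: vgimportdevices --all records any visible VGs in /etc/lvm/devices/system.devices, and the touch afterwards ensures the file exists (presumably to keep lvm in devices-file mode even when no VG was found). By hand:

    /usr/sbin/vgimportdevices --all
    touch /etc/lvm/devices/system.devices
    chown root:root /etc/lvm/devices/system.devices
    chmod 0600 /etc/lvm/devices/system.devices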
Dec 09 15:57:16 compute-0 sudo[37161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyqfctdlgzvjiipzlknjjhzmlsqzhhbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295836.5153813-287-105361572270832/AnsiballZ_getent.py'
Dec 09 15:57:16 compute-0 sudo[37161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:17 compute-0 python3.9[37163]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 09 15:57:17 compute-0 sudo[37161]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:17 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 15:57:17 compute-0 sudo[37315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqetuslkpekcukkicorrxigpyaldprwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295837.3614302-295-157025266922413/AnsiballZ_group.py'
Dec 09 15:57:17 compute-0 sudo[37315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:17 compute-0 python3.9[37317]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 15:57:17 compute-0 groupadd[37318]: group added to /etc/group: name=qemu, GID=107
Dec 09 15:57:17 compute-0 groupadd[37318]: group added to /etc/gshadow: name=qemu
Dec 09 15:57:17 compute-0 groupadd[37318]: new group: name=qemu, GID=107
Dec 09 15:57:18 compute-0 sudo[37315]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:18 compute-0 sudo[37473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egfcfyudzmkqkehumpdegzhbixesvxdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295838.1858132-303-10862123018099/AnsiballZ_user.py'
Dec 09 15:57:18 compute-0 sudo[37473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:18 compute-0 python3.9[37475]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 09 15:57:18 compute-0 useradd[37477]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 09 15:57:18 compute-0 sudo[37473]: pam_unix(sudo:session): session closed for user root
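The getent/group/user trio pins the qemu account to fixed IDs before any package can create it with different ones. What groupadd and useradd logged corresponds to:

    groupadd -g 107 qemu
    useradd -m -u 107 -g qemu -c 'qemu user' -s /sbin/nologin qemu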
Dec 09 15:57:19 compute-0 sudo[37633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhbeyktzbbusefjozykikaofnwbyvcqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295839.1365273-311-275756883993245/AnsiballZ_getent.py'
Dec 09 15:57:19 compute-0 sudo[37633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:19 compute-0 python3.9[37635]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 09 15:57:19 compute-0 sudo[37633]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:20 compute-0 sudo[37786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yodiuwkdwxniharpremlazlhzlrlabvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295839.7486117-319-116230279796197/AnsiballZ_group.py'
Dec 09 15:57:20 compute-0 sudo[37786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:20 compute-0 python3.9[37788]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 15:57:20 compute-0 groupadd[37789]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 09 15:57:20 compute-0 groupadd[37789]: group added to /etc/gshadow: name=hugetlbfs
Dec 09 15:57:20 compute-0 groupadd[37789]: new group: name=hugetlbfs, GID=42477
Dec 09 15:57:20 compute-0 sudo[37786]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:20 compute-0 sudo[37944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amrfnetlqnhrrtbxiukpeoikgyeienyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295840.4547722-328-46764980807726/AnsiballZ_file.py'
Dec 09 15:57:20 compute-0 sudo[37944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:20 compute-0 python3.9[37946]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 09 15:57:20 compute-0 sudo[37944]: pam_unix(sudo:session): session closed for user root
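The same pattern covers the hugetlbfs group and the vhost-user socket directory, including the SELinux user and type the file task sets:

    groupadd -g 42477 hugetlbfs
    install -d -m 0755 -o qemu -g qemu /var/lib/vhost_sockets
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets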
Dec 09 15:57:21 compute-0 sudo[38096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdcqqywvpkzhkdyzwarnvmvawiyfswif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295841.2128203-339-79068184435069/AnsiballZ_dnf.py'
Dec 09 15:57:21 compute-0 sudo[38096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:21 compute-0 python3.9[38098]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 15:57:23 compute-0 sudo[38096]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:23 compute-0 sudo[38249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shqgkfjbtangyhqwdsegynqcqzmzakjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295843.4953344-347-124958395230663/AnsiballZ_file.py'
Dec 09 15:57:23 compute-0 sudo[38249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:24 compute-0 python3.9[38251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 15:57:24 compute-0 sudo[38249]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:24 compute-0 sudo[38401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssnrfqkouddzmaatvvobpiaopiylhabi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295844.2042398-355-128350156304764/AnsiballZ_stat.py'
Dec 09 15:57:24 compute-0 sudo[38401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:24 compute-0 python3.9[38403]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 15:57:24 compute-0 sudo[38401]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:25 compute-0 sudo[38524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rynvyitonkvelygmzgalpptzzxvsnasd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295844.2042398-355-128350156304764/AnsiballZ_copy.py'
Dec 09 15:57:25 compute-0 sudo[38524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:25 compute-0 python3.9[38526]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765295844.2042398-355-128350156304764/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 15:57:25 compute-0 sudo[38524]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:26 compute-0 sudo[38676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdreezpcgmckybnadbfebqcskdoqcnmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295845.4906251-370-273339161719843/AnsiballZ_systemd.py'
Dec 09 15:57:26 compute-0 sudo[38676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:26 compute-0 python3.9[38678]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 15:57:26 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 09 15:57:26 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 09 15:57:26 compute-0 kernel: Bridge firewalling registered
Dec 09 15:57:26 compute-0 systemd-modules-load[38682]: Inserted module 'br_netfilter'
Dec 09 15:57:26 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 09 15:57:26 compute-0 sudo[38676]: pam_unix(sudo:session): session closed for user root
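Restarting systemd-modules-load.service picks up the freshly copied /etc/modules-load.d/99-edpm.conf. Only br_netfilter is confirmed by the kernel messages; the file's full contents are not logged, so it may list more modules. A minimal reproduction under that assumption:

    echo br_netfilter > /etc/modules-load.d/99-edpm.conf   # real file may list additional modules
    systemctl restart systemd-modules-load.service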
Dec 09 15:57:26 compute-0 sudo[38839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggxavmspzacovksbqwfewtelybymhoch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295846.6860907-378-3322434592810/AnsiballZ_stat.py'
Dec 09 15:57:27 compute-0 sudo[38839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:27 compute-0 python3.9[38841]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 15:57:27 compute-0 sudo[38839]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:27 compute-0 sudo[38962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpgzivvgwnmwhxplgazfhoidcwutlshm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295846.6860907-378-3322434592810/AnsiballZ_copy.py'
Dec 09 15:57:27 compute-0 sudo[38962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:27 compute-0 python3.9[38964]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765295846.6860907-378-3322434592810/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 15:57:27 compute-0 sudo[38962]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:28 compute-0 sudo[39114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wccmurfvhiqvzkpxejppxwhktkhagfsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295848.0997074-396-158461069358010/AnsiballZ_dnf.py'
Dec 09 15:57:28 compute-0 sudo[39114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:28 compute-0 python3.9[39116]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 15:57:31 compute-0 sshd-session[39119]: Invalid user admin from 146.190.31.45 port 59014
Dec 09 15:57:31 compute-0 sshd-session[39119]: Connection closed by invalid user admin 146.190.31.45 port 59014 [preauth]
Dec 09 15:57:41 compute-0 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 09 15:57:41 compute-0 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 09 15:57:41 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 15:57:41 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 15:57:41 compute-0 systemd[1]: Reloading.
Dec 09 15:57:42 compute-0 systemd-rc-local-generator[39181]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:57:42 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 15:57:42 compute-0 sudo[39114]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:43 compute-0 python3.9[40395]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 15:57:44 compute-0 python3.9[41304]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 09 15:57:44 compute-0 python3.9[42047]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 15:57:45 compute-0 sudo[42960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiicbjmpmausxfeibvuujbateoopplre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295865.1786706-435-150206528719590/AnsiballZ_command.py'
Dec 09 15:57:45 compute-0 sudo[42960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:45 compute-0 python3.9[42963]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:57:45 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 09 15:57:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 15:57:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 15:57:46 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.181s CPU time.
Dec 09 15:57:46 compute-0 systemd[1]: run-r7ae82c921b724cfcb91178b9ee3bf590.service: Deactivated successfully.
Dec 09 15:57:46 compute-0 systemd[1]: Starting Authorization Manager...
Dec 09 15:57:46 compute-0 polkitd[43504]: Started polkitd version 0.117
Dec 09 15:57:46 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 09 15:57:46 compute-0 polkitd[43504]: Loading rules from directory /etc/polkit-1/rules.d
Dec 09 15:57:46 compute-0 polkitd[43504]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 09 15:57:46 compute-0 polkitd[43504]: Finished loading, compiling and executing 2 rules
Dec 09 15:57:46 compute-0 systemd[1]: Started Authorization Manager.
Dec 09 15:57:46 compute-0 polkitd[43504]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 09 15:57:46 compute-0 sudo[42960]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:46 compute-0 sudo[43672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyrmbbsritbbemmnpysbvqsxkbpszwpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295866.546511-444-207831116774633/AnsiballZ_systemd.py'
Dec 09 15:57:46 compute-0 sudo[43672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:47 compute-0 python3.9[43674]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 15:57:47 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 09 15:57:47 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 09 15:57:47 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 09 15:57:47 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 09 15:57:47 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 09 15:57:47 compute-0 sudo[43672]: pam_unix(sudo:session): session closed for user root
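The tuned sequence installs the profiles, activates throughput-performance, then enables and restarts the daemon; enable --now plus the profile switch is the manual approximation:

    dnf -y install tuned tuned-profiles-cpu-partitioning
    tuned-adm profile throughput-performance
    systemctl enable --now tuned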
Dec 09 15:57:48 compute-0 python3.9[43836]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 09 15:57:49 compute-0 sudo[43986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dacrzenrsfqaczbfqtsyitggpklruczz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295869.5654116-501-250753679476344/AnsiballZ_systemd.py'
Dec 09 15:57:49 compute-0 sudo[43986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:50 compute-0 python3.9[43988]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 15:57:50 compute-0 systemd[1]: Reloading.
Dec 09 15:57:50 compute-0 systemd-rc-local-generator[44016]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:57:50 compute-0 sudo[43986]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:50 compute-0 sudo[44174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ricjdvjqpjrmxmragpnhsbmtklhbghml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295870.5700238-501-49921648849452/AnsiballZ_systemd.py'
Dec 09 15:57:50 compute-0 sudo[44174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:51 compute-0 python3.9[44176]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 15:57:51 compute-0 systemd[1]: Reloading.
Dec 09 15:57:51 compute-0 systemd-rc-local-generator[44201]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:57:51 compute-0 sudo[44174]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:51 compute-0 sudo[44363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qucnukbjxupplltpefdflgjkqxsbodrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295871.6628754-517-202588193865287/AnsiballZ_command.py'
Dec 09 15:57:51 compute-0 sudo[44363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:52 compute-0 python3.9[44365]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:57:52 compute-0 sudo[44363]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:52 compute-0 sudo[44516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyynjnsluclbgqyigwokftvqoaqxyvmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295872.3059387-525-168845126729487/AnsiballZ_command.py'
Dec 09 15:57:52 compute-0 sudo[44516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:52 compute-0 python3.9[44518]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:57:52 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 09 15:57:52 compute-0 sudo[44516]: pam_unix(sudo:session): session closed for user root
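The two command invocations above format and enable a 1 GiB swap file at /swap; the kernel line confirms 1048572k coming online. A minimal Ansible reconstruction of the logged tasks (task names are assumptions, the commands are verbatim from the log):

    # Hypothetical reconstruction of the ansible.legacy.command calls at 15:57:52.
    - name: Format /swap as swap space
      ansible.builtin.command: mkswap "/swap"

    - name: Activate the swap file
      ansible.builtin.command: swapon "/swap"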
Dec 09 15:57:53 compute-0 sudo[44669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrqjheusmwepmhctqkdsvfpmhmoadanr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295873.047371-533-4778244027153/AnsiballZ_command.py'
Dec 09 15:57:53 compute-0 sudo[44669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:53 compute-0 python3.9[44671]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:57:54 compute-0 sudo[44669]: pam_unix(sudo:session): session closed for user root
Dec 09 15:57:55 compute-0 sudo[44831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oanelnjjfklwkeaqmsambiasngduzeft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295875.0842614-541-96926292632132/AnsiballZ_command.py'
Dec 09 15:57:55 compute-0 sudo[44831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:55 compute-0 python3.9[44833]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:57:55 compute-0 sudo[44831]: pam_unix(sudo:session): session closed for user root
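Taken together, the tasks from 15:57:50 to 15:57:55 disable kernel samepage merging: ksm.service and ksmtuned.service are stopped and disabled, and writing 2 to /sys/kernel/mm/ksm/run stops the KSM thread and additionally unmerges pages that were already shared. A sketch of the sequence (the redirect only works under a shell, so ansible.builtin.shell is assumed here; the log records the echo via the command module):

    - name: Stop and disable the KSM services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - ksm.service
        - ksmtuned.service

    - name: Stop KSM and unmerge already-shared pages   # 2 = stop + unmerge
      ansible.builtin.shell: echo 2 >/sys/kernel/mm/ksm/run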
Dec 09 15:57:55 compute-0 sudo[44984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfcxveacchmvqqkruxkzwxqudpeyvmvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295875.7008612-549-101812778365623/AnsiballZ_systemd.py'
Dec 09 15:57:55 compute-0 sudo[44984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:57:56 compute-0 python3.9[44986]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 15:57:56 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 09 15:57:56 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 09 15:57:56 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 09 15:57:56 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 09 15:57:56 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 09 15:57:56 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 09 15:57:56 compute-0 sudo[44984]: pam_unix(sudo:session): session closed for user root
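Restarting systemd-sysctl.service re-applies every kernel variable from /etc/sysctl.conf and /etc/sysctl.d, so any sysctl drop-ins written earlier in the run take effect without a reboot. The task behind the invocation at 15:57:56:

    - name: Re-apply kernel variables
      ansible.builtin.systemd:
        name: systemd-sysctl.service
        state: restarted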
Dec 09 15:57:56 compute-0 sshd-session[31407]: Connection closed by 192.168.122.30 port 39782
Dec 09 15:57:56 compute-0 sshd-session[31404]: pam_unix(sshd:session): session closed for user zuul
Dec 09 15:57:56 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 09 15:57:56 compute-0 systemd[1]: session-9.scope: Consumed 2min 18.139s CPU time.
Dec 09 15:57:56 compute-0 systemd-logind[786]: Session 9 logged out. Waiting for processes to exit.
Dec 09 15:57:56 compute-0 systemd-logind[786]: Removed session 9.
Dec 09 15:58:01 compute-0 sshd-session[45017]: Received disconnect from 171.253.168.86 port 56732:11:  [preauth]
Dec 09 15:58:01 compute-0 sshd-session[45017]: Disconnected from authenticating user root 171.253.168.86 port 56732 [preauth]
Dec 09 15:58:02 compute-0 sshd-session[45019]: Accepted publickey for zuul from 192.168.122.30 port 37748 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 15:58:02 compute-0 systemd-logind[786]: New session 10 of user zuul.
Dec 09 15:58:02 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 09 15:58:02 compute-0 sshd-session[45019]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 15:58:03 compute-0 python3.9[45172]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:58:04 compute-0 sudo[45326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahtvdyenjnslqwgpyncqlvvdhtqizjto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295884.209848-36-240855205095030/AnsiballZ_getent.py'
Dec 09 15:58:04 compute-0 sudo[45326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:04 compute-0 python3.9[45328]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 09 15:58:04 compute-0 sudo[45326]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:05 compute-0 sudo[45479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uozwbnpnswdbrzjiiexpxtoaqbixdyae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295885.089586-44-50619112604286/AnsiballZ_group.py'
Dec 09 15:58:05 compute-0 sudo[45479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:05 compute-0 python3.9[45481]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 15:58:05 compute-0 groupadd[45482]: group added to /etc/group: name=openvswitch, GID=42476
Dec 09 15:58:05 compute-0 groupadd[45482]: group added to /etc/gshadow: name=openvswitch
Dec 09 15:58:05 compute-0 groupadd[45482]: new group: name=openvswitch, GID=42476
Dec 09 15:58:05 compute-0 sudo[45479]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:06 compute-0 sudo[45637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixnsbnnsmcrfbtdmehkkovsuezxbehof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295885.961549-52-273792018496043/AnsiballZ_user.py'
Dec 09 15:58:06 compute-0 sudo[45637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:06 compute-0 python3.9[45639]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 09 15:58:06 compute-0 useradd[45641]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 09 15:58:06 compute-0 useradd[45641]: add 'openvswitch' to group 'hugetlbfs'
Dec 09 15:58:06 compute-0 useradd[45641]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 09 15:58:06 compute-0 sudo[45637]: pam_unix(sudo:session): session closed for user root
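The getent probe, groupadd, and useradd above pin the openvswitch account to fixed IDs (UID/GID 42476), add it to the hugetlbfs supplementary group, and deny interactive login. Reconstructed from the logged module parameters:

    - name: Create the openvswitch group with a fixed GID
      ansible.builtin.group:
        name: openvswitch
        gid: 42476
        state: present

    - name: Create the openvswitch service account
      ansible.builtin.user:
        name: openvswitch
        comment: openvswitch user
        uid: 42476
        group: openvswitch
        groups:
          - hugetlbfs
        shell: /sbin/nologin
        state: present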
Dec 09 15:58:07 compute-0 sudo[45797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clrpuhhvgvswdmstbsuedeymuigobewh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295886.9658878-62-51500605727001/AnsiballZ_setup.py'
Dec 09 15:58:07 compute-0 sudo[45797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:07 compute-0 python3.9[45799]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 15:58:07 compute-0 sudo[45797]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:08 compute-0 sudo[45881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjntbitfkatpnfhaspyrhspupxdwvynv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295886.9658878-62-51500605727001/AnsiballZ_dnf.py'
Dec 09 15:58:08 compute-0 sudo[45881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:08 compute-0 python3.9[45883]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 15:58:11 compute-0 sudo[45881]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:11 compute-0 sudo[46045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agdzmnmyunoyyymjwhengynmvutbkoxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295891.3314962-76-8751361686143/AnsiballZ_dnf.py'
Dec 09 15:58:11 compute-0 sudo[46045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:11 compute-0 python3.9[46047]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 15:58:17 compute-0 sshd-session[46062]: Invalid user admin from 146.190.31.45 port 45646
Dec 09 15:58:17 compute-0 sshd-session[46062]: Connection closed by invalid user admin 146.190.31.45 port 45646 [preauth]
Dec 09 15:58:23 compute-0 kernel: SELinux:  Converting 2732 SID table entries...
Dec 09 15:58:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 15:58:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 15:58:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 15:58:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 15:58:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 15:58:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 15:58:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 15:58:23 compute-0 groupadd[46072]: group added to /etc/group: name=unbound, GID=993
Dec 09 15:58:24 compute-0 groupadd[46072]: group added to /etc/gshadow: name=unbound
Dec 09 15:58:24 compute-0 groupadd[46072]: new group: name=unbound, GID=993
Dec 09 15:58:24 compute-0 useradd[46079]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 09 15:58:24 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 09 15:58:24 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 09 15:58:25 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 15:58:25 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 15:58:25 compute-0 systemd[1]: Reloading.
Dec 09 15:58:25 compute-0 systemd-rc-local-generator[46574]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:58:25 compute-0 systemd-sysv-generator[46577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 15:58:25 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 15:58:26 compute-0 sudo[46045]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:26 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 15:58:26 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 15:58:26 compute-0 systemd[1]: run-r42b8e5c8606e4eb3b4cf5165a2ca0b34.service: Deactivated successfully.
Dec 09 15:58:27 compute-0 sudo[47146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnkdaavcgjtnbshzbvlpphgpepugezjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295906.5138958-84-7361154558207/AnsiballZ_systemd.py'
Dec 09 15:58:27 compute-0 sudo[47146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:27 compute-0 python3.9[47148]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 15:58:27 compute-0 systemd[1]: Reloading.
Dec 09 15:58:27 compute-0 systemd-rc-local-generator[47180]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:58:27 compute-0 systemd-sysv-generator[47183]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 15:58:27 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 09 15:58:27 compute-0 chown[47190]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 09 15:58:27 compute-0 ovs-ctl[47195]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 09 15:58:27 compute-0 ovs-ctl[47195]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 09 15:58:27 compute-0 ovs-ctl[47195]: Starting ovsdb-server [  OK  ]
Dec 09 15:58:27 compute-0 ovs-vsctl[47244]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 09 15:58:28 compute-0 ovs-vsctl[47264]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"037f0e18-4bfd-4487-a7a8-05ae973391a9\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 09 15:58:28 compute-0 ovs-ctl[47195]: Configuring Open vSwitch system IDs [  OK  ]
Dec 09 15:58:28 compute-0 ovs-ctl[47195]: Enabling remote OVSDB managers [  OK  ]
Dec 09 15:58:28 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 09 15:58:28 compute-0 ovs-vsctl[47270]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 09 15:58:28 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 09 15:58:28 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 09 15:58:28 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 09 15:58:28 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 09 15:58:28 compute-0 ovs-ctl[47315]: Inserting openvswitch module [  OK  ]
Dec 09 15:58:28 compute-0 ovs-ctl[47284]: Starting ovs-vswitchd [  OK  ]
Dec 09 15:58:28 compute-0 ovs-vsctl[47332]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 09 15:58:28 compute-0 ovs-ctl[47284]: Enabling remote OVSDB managers [  OK  ]
Dec 09 15:58:28 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 09 15:58:28 compute-0 systemd[1]: Starting Open vSwitch...
Dec 09 15:58:28 compute-0 systemd[1]: Finished Open vSwitch.
Dec 09 15:58:28 compute-0 sudo[47146]: pam_unix(sudo:session): session closed for user root
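Open vSwitch is fetched with download_only first and installed in a second dnf task, then openvswitch.service is enabled and started; on first start ovs-ctl creates an empty /etc/openvswitch/conf.db, launches ovsdb-server and ovs-vswitchd, and stamps the system-id seen in the ovs-vsctl lines. A reconstruction of the three tasks:

    - name: Pre-download openvswitch
      ansible.builtin.dnf:
        name: openvswitch
        download_only: true

    - name: Install openvswitch
      ansible.builtin.dnf:
        name: openvswitch
        state: present

    - name: Enable and start Open vSwitch
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        masked: false
        state: started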
Dec 09 15:58:29 compute-0 python3.9[47484]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:58:30 compute-0 sudo[47634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlvqljbarvngjlltvciqzdyobgdpxixy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295909.5163784-102-229937930768117/AnsiballZ_sefcontext.py'
Dec 09 15:58:30 compute-0 sudo[47634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:30 compute-0 python3.9[47636]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 09 15:58:31 compute-0 kernel: SELinux:  Converting 2746 SID table entries...
Dec 09 15:58:31 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 15:58:31 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 15:58:31 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 15:58:31 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 15:58:31 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 15:58:31 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 15:58:31 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 15:58:31 compute-0 sudo[47634]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:32 compute-0 python3.9[47792]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:58:33 compute-0 sudo[47948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqvfuasdmaywpembmvkylyofoceiuqlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295912.850688-120-243741261852647/AnsiballZ_dnf.py'
Dec 09 15:58:33 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 09 15:58:33 compute-0 sudo[47948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:33 compute-0 python3.9[47950]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 15:58:34 compute-0 sudo[47948]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:35 compute-0 sudo[48101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yswqlafcholgoajpknksspxpozldealq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295914.8350565-128-92659565099211/AnsiballZ_command.py'
Dec 09 15:58:35 compute-0 sudo[48101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:35 compute-0 python3.9[48103]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:58:36 compute-0 sudo[48101]: pam_unix(sudo:session): session closed for user root
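After the base tool set is installed it is immediately re-checked with rpm -V, which reports any file whose size, mode, or digest no longer matches the RPM database. In outline (the register/changed_when handling is an assumption; the package list is verbatim from the log):

    - name: Install base host packages
      ansible.builtin.dnf:
        name:
          - driverctl
          - lvm2
          - crudini
          - jq
          - nftables
          - NetworkManager
          - openstack-selinux
          - python3-libselinux
          - python3-pyyaml
          - rsync
          - tmpwatch
          - sysstat
          - iproute-tc
          - ksmtuned
          - systemd-container
          - crypto-policies-scripts
          - grubby
          - sos
        state: present

    - name: Verify the packages against the RPM database
      ansible.builtin.command: >-
        rpm -V driverctl lvm2 crudini jq nftables NetworkManager
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts
        grubby sos
      register: rpm_verify      # variable name is an assumption
      changed_when: false
      failed_when: false        # rpm -V exits non-zero when any file differs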
Dec 09 15:58:36 compute-0 sudo[48388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohqcjxxvkfzfqgzpuhentallhxypgfzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295916.3773367-136-156484663596940/AnsiballZ_file.py'
Dec 09 15:58:36 compute-0 sudo[48388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:36 compute-0 python3.9[48390]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 09 15:58:37 compute-0 sudo[48388]: pam_unix(sudo:session): session closed for user root
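These two tasks first register a persistent SELinux file-context rule mapping /var/lib/edpm-config(/.*)? to container_file_t, then create the directory so it carries the label containers are allowed to use. Reconstructed from the logged parameters:

    - name: Label /var/lib/edpm-config for container access
      community.general.sefcontext:
        target: "/var/lib/edpm-config(/.*)?"
        setype: container_file_t
        selevel: s0
        state: present
        reload: true

    - name: Create the directory with the new context
      ansible.builtin.file:
        path: /var/lib/edpm-config
        state: directory
        mode: "0750"
        setype: container_file_t
        selevel: s0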
Dec 09 15:58:37 compute-0 python3.9[48540]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 15:58:38 compute-0 sudo[48692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nltwbnawwkjmtumswcrvkvthtnztnbrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295918.166786-152-250356233141806/AnsiballZ_dnf.py'
Dec 09 15:58:38 compute-0 sudo[48692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:38 compute-0 python3.9[48694]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 15:58:40 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 15:58:40 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 15:58:40 compute-0 systemd[1]: Reloading.
Dec 09 15:58:41 compute-0 systemd-sysv-generator[48735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 15:58:41 compute-0 systemd-rc-local-generator[48732]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:58:41 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 15:58:41 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 15:58:41 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 15:58:41 compute-0 systemd[1]: run-r11c341c4d3c3440ab6ee38df7bd8c31f.service: Deactivated successfully.
Dec 09 15:58:41 compute-0 sudo[48692]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:42 compute-0 sudo[49008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nebkxsyrpdfvxwzdlvhucqqjicsysdgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295921.801433-160-166052108030803/AnsiballZ_systemd.py'
Dec 09 15:58:42 compute-0 sudo[49008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:42 compute-0 python3.9[49010]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 15:58:42 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 09 15:58:42 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 09 15:58:42 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 09 15:58:42 compute-0 systemd[1]: Stopping Network Manager...
Dec 09 15:58:42 compute-0 NetworkManager[7237]: <info>  [1765295922.4805] caught SIGTERM, shutting down normally.
Dec 09 15:58:42 compute-0 NetworkManager[7237]: <info>  [1765295922.4820] dhcp4 (eth0): canceled DHCP transaction
Dec 09 15:58:42 compute-0 NetworkManager[7237]: <info>  [1765295922.4820] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 15:58:42 compute-0 NetworkManager[7237]: <info>  [1765295922.4820] dhcp4 (eth0): state changed no lease
Dec 09 15:58:42 compute-0 NetworkManager[7237]: <info>  [1765295922.4822] manager: NetworkManager state is now CONNECTED_SITE
Dec 09 15:58:42 compute-0 NetworkManager[7237]: <info>  [1765295922.4891] exiting (success)
Dec 09 15:58:42 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 15:58:42 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 15:58:42 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 09 15:58:42 compute-0 systemd[1]: Stopped Network Manager.
Dec 09 15:58:42 compute-0 systemd[1]: NetworkManager.service: Consumed 10.977s CPU time, 4.0M memory peak, read 0B from disk, written 28.0K to disk.
Dec 09 15:58:42 compute-0 systemd[1]: Starting Network Manager...
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.5851] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:a58260a5-f855-49b9-849b-ff1e8bfdaaf7)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.5856] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.5936] manager[0x55a85b45b000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 09 15:58:42 compute-0 systemd[1]: Starting Hostname Service...
Dec 09 15:58:42 compute-0 systemd[1]: Started Hostname Service.
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6713] hostname: hostname: using hostnamed
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6714] hostname: static hostname changed from (none) to "compute-0"
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6724] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6732] manager[0x55a85b45b000]: rfkill: Wi-Fi hardware radio set enabled
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6732] manager[0x55a85b45b000]: rfkill: WWAN hardware radio set enabled
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6771] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6787] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6788] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6789] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6790] manager: Networking is enabled by state file
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6794] settings: Loaded settings plugin: keyfile (internal)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6800] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6846] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6865] dhcp: init: Using DHCP client 'internal'
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6871] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6884] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6894] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6912] device (lo): Activation: starting connection 'lo' (1bb1a588-fc76-48e8-baa9-019c5d49bc8e)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6925] device (eth0): carrier: link connected
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6933] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6943] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6944] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6956] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6969] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6986] device (eth1): carrier: link connected
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.6993] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7001] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (ac9b6745-2bc2-5ba9-b147-889934bdce51) (indicated)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7002] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7007] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7016] device (eth1): Activation: starting connection 'ci-private-network' (ac9b6745-2bc2-5ba9-b147-889934bdce51)
Dec 09 15:58:42 compute-0 systemd[1]: Started Network Manager.
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7024] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7044] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7046] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7048] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7051] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7054] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7056] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7059] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7061] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7066] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7068] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7075] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7102] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7113] dhcp4 (eth0): state changed new lease, address=38.102.83.184
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7123] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7192] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7200] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7208] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7214] device (lo): Activation: successful, device activated.
Dec 09 15:58:42 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7224] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7228] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7231] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7233] device (eth1): Activation: successful, device activated.
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7239] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7241] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7244] manager: NetworkManager state is now CONNECTED_SITE
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7247] device (eth0): Activation: successful, device activated.
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7254] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 09 15:58:42 compute-0 NetworkManager[49021]: <info>  [1765295922.7258] manager: startup complete
Dec 09 15:58:42 compute-0 sudo[49008]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:42 compute-0 systemd[1]: Finished Network Manager Wait Online.
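NetworkManager is restarted right after NetworkManager-ovs is installed, which is why the restarted daemon now loads the NMOvsFactory device plugin a few lines above; the existing eth0/eth1 connections are assumed back rather than torn down, so connectivity survives the restart. The task as logged:

    - name: Restart NetworkManager to load the OVS plugin
      ansible.builtin.systemd:
        name: NetworkManager
        state: restarted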
Dec 09 15:58:43 compute-0 sudo[49234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhvvhzxqjnmfbprcxecusmglnxwgghfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295922.959136-168-116634956010852/AnsiballZ_dnf.py'
Dec 09 15:58:43 compute-0 sudo[49234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:43 compute-0 python3.9[49236]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 15:58:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 15:58:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 15:58:48 compute-0 systemd[1]: Reloading.
Dec 09 15:58:48 compute-0 systemd-rc-local-generator[49289]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 15:58:48 compute-0 systemd-sysv-generator[49292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 15:58:48 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 15:58:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 15:58:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 15:58:49 compute-0 systemd[1]: run-rf0cba70c304a44dc9996c3b320903005.service: Deactivated successfully.
Dec 09 15:58:49 compute-0 sudo[49234]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:50 compute-0 sudo[49693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plcxtmvskynrktqnpgvcmxhinwqeerst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295929.70525-180-66394262071290/AnsiballZ_stat.py'
Dec 09 15:58:50 compute-0 sudo[49693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:50 compute-0 python3.9[49695]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 15:58:50 compute-0 sudo[49693]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:50 compute-0 sudo[49845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnzarfvzdpsuvjmfhjarbjvpincsxyfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295930.457474-189-107325388975407/AnsiballZ_ini_file.py'
Dec 09 15:58:50 compute-0 sudo[49845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:51 compute-0 python3.9[49847]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:58:51 compute-0 sudo[49845]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:51 compute-0 sudo[49999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hklyvvyjywfbovyvwatioihhxmpmpwfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295931.4290526-199-120245463304330/AnsiballZ_ini_file.py'
Dec 09 15:58:51 compute-0 sudo[49999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:51 compute-0 python3.9[50001]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:58:51 compute-0 sudo[49999]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:52 compute-0 sudo[50151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjnxaicqbrhblrlxaqbzmgcwbmpkombc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295932.0841725-199-184680110712723/AnsiballZ_ini_file.py'
Dec 09 15:58:52 compute-0 sudo[50151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:52 compute-0 python3.9[50153]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:58:52 compute-0 sudo[50151]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:52 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 15:58:53 compute-0 sudo[50303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atlqatjvgkobnjrfgltvhheznviusodz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295932.7930608-214-136718400816522/AnsiballZ_ini_file.py'
Dec 09 15:58:53 compute-0 sudo[50303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:53 compute-0 python3.9[50305]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:58:53 compute-0 sudo[50303]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:53 compute-0 sudo[50455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctcdcengxagkrotpcbdwvddewmckcksh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295933.3858128-214-176887951996833/AnsiballZ_ini_file.py'
Dec 09 15:58:53 compute-0 sudo[50455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:53 compute-0 python3.9[50457]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:58:53 compute-0 sudo[50455]: pam_unix(sudo:session): session closed for user root
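The five ini_file tasks between 15:58:50 and 15:58:53 hand interface and resolver control over to the deployment tooling: no-auto-default=* stops NetworkManager from generating "Wired connection N" profiles for unconfigured NICs, and any dns=none or rc-manager=unmanaged overrides are removed from both NetworkManager.conf and the cloud-init drop-in so NetworkManager resumes managing resolver configuration. Two representative tasks, mirroring the logged parameters:

    - name: Never auto-create default connections
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: no-auto-default
        value: "*"
        mode: "0644"
        backup: true

    - name: Drop the cloud-init dns=none override
      community.general.ini_file:
        path: /etc/NetworkManager/conf.d/99-cloud-init.conf
        section: main
        option: dns
        value: none
        state: absent
        backup: true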
Dec 09 15:58:54 compute-0 sudo[50607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahmwyhuxzofnynozmmqoditbhmxjsiis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295934.036506-229-212524435658555/AnsiballZ_stat.py'
Dec 09 15:58:54 compute-0 sudo[50607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:54 compute-0 python3.9[50609]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 15:58:54 compute-0 sudo[50607]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:55 compute-0 sudo[50730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxxhmwfhjbhxautlizvgkdpzlpzbrrhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295934.036506-229-212524435658555/AnsiballZ_copy.py'
Dec 09 15:58:55 compute-0 sudo[50730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:55 compute-0 python3.9[50732]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765295934.036506-229-212524435658555/.source _original_basename=.e8od0y0v follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:58:55 compute-0 sudo[50730]: pam_unix(sudo:session): session closed for user root
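The stat/copy pair above is the usual trace of a single ansible.builtin.copy task: the action plugin stats the destination first and only transfers the file when the checksums differ. Here it installs an executable dhclient enter hook (the file content is not logged; the src name is an assumption):

    - name: Install the dhclient enter hook
      ansible.builtin.copy:
        src: dhclient-enter-hooks
        dest: /etc/dhcp/dhclient-enter-hooks
        mode: "0755"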
Dec 09 15:58:55 compute-0 sudo[50882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unvkgytkcyfxjqtprluiaweerkjmigyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295935.4262576-244-174262160120550/AnsiballZ_file.py'
Dec 09 15:58:55 compute-0 sudo[50882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:55 compute-0 python3.9[50884]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:58:55 compute-0 sudo[50882]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:56 compute-0 sudo[51034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grcdoyxxyrgaxvhjouyrceqyzcluuqdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295936.0819402-252-167971687475027/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 09 15:58:56 compute-0 sudo[51034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:56 compute-0 python3.9[51036]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 09 15:58:56 compute-0 sudo[51034]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:57 compute-0 sudo[51186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vadbeqohyhjtyqdcavbdxryqvamzagrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295936.988702-261-247401073675185/AnsiballZ_file.py'
Dec 09 15:58:57 compute-0 sudo[51186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:57 compute-0 python3.9[51188]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:58:57 compute-0 sudo[51186]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:58 compute-0 sudo[51338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esaklilxprfcdnpbnyacvqhekkihjivx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295937.8224504-271-32278182845205/AnsiballZ_stat.py'
Dec 09 15:58:58 compute-0 sudo[51338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:58 compute-0 sudo[51338]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:58 compute-0 sudo[51461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdbttnlcfyrizkvrrwgazuseagpwphyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295937.8224504-271-32278182845205/AnsiballZ_copy.py'
Dec 09 15:58:58 compute-0 sudo[51461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:59 compute-0 sudo[51461]: pam_unix(sudo:session): session closed for user root
Dec 09 15:58:59 compute-0 sudo[51613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faoekklihobwztdcugeeclyjfcsssdvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295939.247488-286-233180455765316/AnsiballZ_slurp.py'
Dec 09 15:58:59 compute-0 sudo[51613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:58:59 compute-0 python3.9[51615]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 09 15:58:59 compute-0 sudo[51613]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:01 compute-0 sudo[51791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-melurvupmpssrepstwkmtgxbqqcobvpn ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295940.9062674-295-173163937314251/async_wrapper.py j179229000749 300 /home/zuul/.ansible/tmp/ansible-tmp-1765295940.9062674-295-173163937314251/AnsiballZ_edpm_os_net_config.py _'
Dec 09 15:59:01 compute-0 sudo[51791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:01 compute-0 sshd-session[51716]: Invalid user admin from 146.190.31.45 port 50226
Dec 09 15:59:01 compute-0 ansible-async_wrapper.py[51793]: Invoked with j179229000749 300 /home/zuul/.ansible/tmp/ansible-tmp-1765295940.9062674-295-173163937314251/AnsiballZ_edpm_os_net_config.py _
Dec 09 15:59:01 compute-0 ansible-async_wrapper.py[51796]: Starting module and watcher
Dec 09 15:59:01 compute-0 ansible-async_wrapper.py[51796]: Start watching 51797 (300)
Dec 09 15:59:01 compute-0 ansible-async_wrapper.py[51797]: Start module (51797)
Dec 09 15:59:01 compute-0 ansible-async_wrapper.py[51793]: Return async_wrapper task started.
Dec 09 15:59:01 compute-0 sudo[51791]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:01 compute-0 sshd-session[51716]: Connection closed by invalid user admin 146.190.31.45 port 50226 [preauth]
Dec 09 15:59:02 compute-0 python3.9[51798]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 09 15:59:02 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 09 15:59:02 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 09 15:59:02 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 09 15:59:02 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 09 15:59:02 compute-0 kernel: cfg80211: failed to load regulatory.db
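os-net-config is launched through Ansible's async wrapper with a 300-second timeout so a bad network change cannot hang the play indefinitely, and use_nmstate=True drives the changes through NetworkManager, which is why the checkpoint-create audit entries appear next: a checkpoint lets the daemon roll the configuration back if it is never confirmed. A reconstruction of the logged task (the bare module name is as logged, collection prefix omitted; the poll interval is an assumption):

    - name: Apply the node network configuration   # 300s async timeout as logged
      edpm_os_net_config:
        config_file: /etc/os-net-config/config.yaml
        cleanup: true
        debug: true
        detailed_exit_codes: true
        safe_defaults: false
        use_nmstate: true
      async: 300
      poll: 5    # polling interval is an assumption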
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.0318] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.0340] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1100] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1103] audit: op="connection-add" uuid="9290bba1-31fd-4607-80fa-119217225ed2" name="br-ex-br" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1129] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1132] audit: op="connection-add" uuid="bede1ea4-c65d-4115-8844-b58f528e2b4c" name="br-ex-port" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1154] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1157] audit: op="connection-add" uuid="46826f0c-d823-4d26-bb0f-28e3ca75cdc1" name="eth1-port" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1181] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1184] audit: op="connection-add" uuid="d55f7838-4c89-41d4-9974-96d1b3e64f27" name="vlan20-port" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1213] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1216] audit: op="connection-add" uuid="469cef5b-5109-4296-9583-795f2cf8ead3" name="vlan21-port" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1241] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1244] audit: op="connection-add" uuid="6ad93456-ca21-489a-aa06-4e89846ed1fa" name="vlan22-port" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1262] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1264] audit: op="connection-add" uuid="b3a560fd-9f4a-4a7b-aac8-f0c01527d3a9" name="vlan23-port" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1299] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1330] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1332] audit: op="connection-add" uuid="5e32d99a-09b3-47bf-89ed-0241b39252aa" name="br-ex-if" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1391] audit: op="connection-update" uuid="ac9b6745-2bc2-5ba9-b147-889934bdce51" name="ci-private-network" args="connection.master,connection.slave-type,connection.port-type,connection.controller,connection.timestamp,ipv4.never-default,ipv4.addresses,ipv4.dns,ipv4.routes,ipv4.method,ipv4.routing-rules,ipv6.addresses,ipv6.addr-gen-mode,ipv6.routes,ipv6.dns,ipv6.method,ipv6.routing-rules,ovs-interface.type,ovs-external-ids.data" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1415] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1417] audit: op="connection-add" uuid="38ae0966-df52-41e4-b53c-56fdda3979d4" name="vlan20-if" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1441] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1444] audit: op="connection-add" uuid="bd98c78e-66b0-4da6-aa86-188802a86772" name="vlan21-if" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1466] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1469] audit: op="connection-add" uuid="5297f0e4-21ec-40fc-b4df-1049cc69160b" name="vlan22-if" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1496] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1499] audit: op="connection-add" uuid="69a47893-d38a-49d3-a528-4bf642376da6" name="vlan23-if" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1514] audit: op="connection-delete" uuid="9e208782-6966-35e9-a939-9a846f39c3da" name="Wired connection 1" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1533] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <warn>  [1765295944.1538] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1550] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1555] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (9290bba1-31fd-4607-80fa-119217225ed2)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1556] audit: op="connection-activate" uuid="9290bba1-31fd-4607-80fa-119217225ed2" name="br-ex-br" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1559] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <warn>  [1765295944.1560] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1568] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1572] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (bede1ea4-c65d-4115-8844-b58f528e2b4c)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1574] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <warn>  [1765295944.1575] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1580] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1583] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (46826f0c-d823-4d26-bb0f-28e3ca75cdc1)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1587] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <warn>  [1765295944.1588] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1593] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1597] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (d55f7838-4c89-41d4-9974-96d1b3e64f27)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1599] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <warn>  [1765295944.1600] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1605] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1610] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (469cef5b-5109-4296-9583-795f2cf8ead3)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1611] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <warn>  [1765295944.1612] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1616] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1620] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (6ad93456-ca21-489a-aa06-4e89846ed1fa)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1621] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <warn>  [1765295944.1622] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1625] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1633] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (b3a560fd-9f4a-4a7b-aac8-f0c01527d3a9)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1634] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1636] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1638] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1643] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <warn>  [1765295944.1644] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1646] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1650] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (5e32d99a-09b3-47bf-89ed-0241b39252aa)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1650] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1652] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1653] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1654] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1655] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1664] device (eth1): disconnecting for new activation request.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1664] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1666] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1667] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1668] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1670] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <warn>  [1765295944.1670] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1672] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1675] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (38ae0966-df52-41e4-b53c-56fdda3979d4)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1675] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1677] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1679] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1679] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1681] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <warn>  [1765295944.1682] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1684] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1686] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (bd98c78e-66b0-4da6-aa86-188802a86772)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1687] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1689] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1690] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1691] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1693] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <warn>  [1765295944.1693] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1695] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1699] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (5297f0e4-21ec-40fc-b4df-1049cc69160b)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1699] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1701] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1702] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1703] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1705] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <warn>  [1765295944.1706] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1708] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1710] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (69a47893-d38a-49d3-a528-4bf642376da6)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1711] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1713] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1714] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1715] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1718] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1729] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1731] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1735] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1736] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1741] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1743] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1746] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1749] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1750] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1754] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1757] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1759] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1760] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1764] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1767] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1769] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1770] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1775] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1778] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1780] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1781] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1785] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1788] dhcp4 (eth0): canceled DHCP transaction
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1789] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1789] dhcp4 (eth0): state changed no lease
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1790] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 09 15:59:04 compute-0 kernel: Timeout policy base is empty
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1802] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1805] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51799 uid=0 result="fail" reason="Device is not activated"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1810] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 09 15:59:04 compute-0 systemd-udevd[51803]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1842] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1848] dhcp4 (eth0): state changed new lease, address=38.102.83.184
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1855] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 09 15:59:04 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1904] device (eth1): disconnecting for new activation request.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1905] audit: op="connection-activate" uuid="ac9b6745-2bc2-5ba9-b147-889934bdce51" name="ci-private-network" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1908] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.1960] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51799 uid=0 result="success"
Dec 09 15:59:04 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2018] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2032] device (eth1): Activation: starting connection 'ci-private-network' (ac9b6745-2bc2-5ba9-b147-889934bdce51)
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2035] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2038] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2041] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2045] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2049] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2053] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2093] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2100] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2102] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2106] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2109] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2110] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2112] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2118] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2124] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2129] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2133] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2138] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2142] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2146] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2151] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2155] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2160] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2165] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2169] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2174] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 15:59:04 compute-0 kernel: br-ex: entered promiscuous mode
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2215] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2225] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2239] device (eth1): Activation: successful, device activated.
Dec 09 15:59:04 compute-0 kernel: vlan22: entered promiscuous mode
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2324] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 09 15:59:04 compute-0 systemd-udevd[51805]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2360] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 kernel: vlan23: entered promiscuous mode
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2393] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2396] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2404] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 15:59:04 compute-0 kernel: vlan21: entered promiscuous mode
Dec 09 15:59:04 compute-0 systemd-udevd[51804]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2498] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2521] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2536] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 09 15:59:04 compute-0 kernel: vlan20: entered promiscuous mode
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2556] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2580] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2599] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2609] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2621] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2627] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2640] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2667] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2698] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2705] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2746] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2755] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2763] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2773] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2783] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2789] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 15:59:04 compute-0 NetworkManager[49021]: <info>  [1765295944.2798] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 15:59:05 compute-0 NetworkManager[49021]: <info>  [1765295945.3961] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51799 uid=0 result="success"
Dec 09 15:59:05 compute-0 sudo[52154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lalzfcnayzwpmdxwjuwacpwrlfqmwxvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295945.0028334-295-211358055483178/AnsiballZ_async_status.py'
Dec 09 15:59:05 compute-0 sudo[52154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:05 compute-0 NetworkManager[49021]: <info>  [1765295945.5870] checkpoint[0x55a85b431950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 09 15:59:05 compute-0 NetworkManager[49021]: <info>  [1765295945.5872] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51799 uid=0 result="success"
Dec 09 15:59:05 compute-0 python3.9[52156]: ansible-ansible.legacy.async_status Invoked with jid=j179229000749.51793 mode=status _async_dir=/root/.ansible_async
Dec 09 15:59:05 compute-0 sudo[52154]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:05 compute-0 NetworkManager[49021]: <info>  [1765295945.9282] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51799 uid=0 result="success"
Dec 09 15:59:05 compute-0 NetworkManager[49021]: <info>  [1765295945.9305] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51799 uid=0 result="success"
Dec 09 15:59:06 compute-0 NetworkManager[49021]: <info>  [1765295946.2252] audit: op="networking-control" arg="global-dns-configuration" pid=51799 uid=0 result="success"
Dec 09 15:59:06 compute-0 NetworkManager[49021]: <info>  [1765295946.2294] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 09 15:59:06 compute-0 NetworkManager[49021]: <info>  [1765295946.2342] audit: op="networking-control" arg="global-dns-configuration" pid=51799 uid=0 result="success"
Dec 09 15:59:06 compute-0 NetworkManager[49021]: <info>  [1765295946.2366] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51799 uid=0 result="success"
Dec 09 15:59:06 compute-0 NetworkManager[49021]: <info>  [1765295946.4025] checkpoint[0x55a85b431a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 09 15:59:06 compute-0 NetworkManager[49021]: <info>  [1765295946.4038] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51799 uid=0 result="success"
Dec 09 15:59:06 compute-0 ansible-async_wrapper.py[51797]: Module complete (51797)
Dec 09 15:59:06 compute-0 ansible-async_wrapper.py[51796]: Done in kid B.
Dec 09 15:59:08 compute-0 sudo[52262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytdrxwymndlwjtqdsififvamsjhpbmom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295945.0028334-295-211358055483178/AnsiballZ_async_status.py'
Dec 09 15:59:08 compute-0 sudo[52262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:09 compute-0 python3.9[52264]: ansible-ansible.legacy.async_status Invoked with jid=j179229000749.51793 mode=status _async_dir=/root/.ansible_async
Dec 09 15:59:09 compute-0 sudo[52262]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:09 compute-0 sudo[52362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxisfzazlxyiesrjiltcgzmqkctnoslb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295945.0028334-295-211358055483178/AnsiballZ_async_status.py'
Dec 09 15:59:09 compute-0 sudo[52362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:09 compute-0 python3.9[52364]: ansible-ansible.legacy.async_status Invoked with jid=j179229000749.51793 mode=cleanup _async_dir=/root/.ansible_async
Dec 09 15:59:09 compute-0 sudo[52362]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:10 compute-0 sudo[52514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xejavfqmmytlhugdrarjcmtmzrxedybm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295949.8819382-322-142762736480412/AnsiballZ_stat.py'
Dec 09 15:59:10 compute-0 sudo[52514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:10 compute-0 python3.9[52516]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 15:59:10 compute-0 sudo[52514]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:10 compute-0 sudo[52637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvwcvlzuukqlnuiyyfqdegwqxshslfvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295949.8819382-322-142762736480412/AnsiballZ_copy.py'
Dec 09 15:59:10 compute-0 sudo[52637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:10 compute-0 python3.9[52639]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765295949.8819382-322-142762736480412/.source.returncode _original_basename=.dnos06p_ follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:59:11 compute-0 sudo[52637]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:11 compute-0 sudo[52789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhvqcocudtmayasjfqekqzpmeknxnznv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295951.2341466-338-94431200739407/AnsiballZ_stat.py'
Dec 09 15:59:11 compute-0 sudo[52789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:11 compute-0 sshd-session[52164]: Received disconnect from 45.78.206.111 port 59006:11: Bye Bye [preauth]
Dec 09 15:59:11 compute-0 sshd-session[52164]: Disconnected from authenticating user root 45.78.206.111 port 59006 [preauth]
Dec 09 15:59:11 compute-0 python3.9[52791]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 15:59:11 compute-0 sudo[52789]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:12 compute-0 sudo[52912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sempawxzrjecnevcwoayuynmpspdqeoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295951.2341466-338-94431200739407/AnsiballZ_copy.py'
Dec 09 15:59:12 compute-0 sudo[52912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:12 compute-0 python3.9[52914]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765295951.2341466-338-94431200739407/.source.cfg _original_basename=.gizalup6 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:59:12 compute-0 sudo[52912]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:12 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 09 15:59:12 compute-0 sudo[53068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhkpyzxreocoqqrlscxtsvsomtzihrgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295952.5090961-353-61649372058655/AnsiballZ_systemd.py'
Dec 09 15:59:12 compute-0 sudo[53068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:13 compute-0 python3.9[53070]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 15:59:13 compute-0 systemd[1]: Reloading Network Manager...
Dec 09 15:59:13 compute-0 NetworkManager[49021]: <info>  [1765295953.1538] audit: op="reload" arg="0" pid=53074 uid=0 result="success"
Dec 09 15:59:13 compute-0 NetworkManager[49021]: <info>  [1765295953.1544] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 09 15:59:13 compute-0 systemd[1]: Reloaded Network Manager.
Dec 09 15:59:13 compute-0 sudo[53068]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:13 compute-0 sshd-session[45022]: Connection closed by 192.168.122.30 port 37748
Dec 09 15:59:13 compute-0 sshd-session[45019]: pam_unix(sshd:session): session closed for user zuul
Dec 09 15:59:13 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 09 15:59:13 compute-0 systemd[1]: session-10.scope: Consumed 50.822s CPU time.
Dec 09 15:59:13 compute-0 systemd-logind[786]: Session 10 logged out. Waiting for processes to exit.
Dec 09 15:59:13 compute-0 systemd-logind[786]: Removed session 10.
Dec 09 15:59:18 compute-0 sshd-session[53104]: Accepted publickey for zuul from 192.168.122.30 port 60542 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 15:59:18 compute-0 systemd-logind[786]: New session 11 of user zuul.
Dec 09 15:59:18 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 09 15:59:18 compute-0 sshd-session[53104]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 15:59:19 compute-0 python3.9[53258]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:59:20 compute-0 python3.9[53412]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 15:59:21 compute-0 python3.9[53605]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:59:22 compute-0 sshd-session[53107]: Connection closed by 192.168.122.30 port 60542
Dec 09 15:59:22 compute-0 sshd-session[53104]: pam_unix(sshd:session): session closed for user zuul
Dec 09 15:59:22 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 09 15:59:22 compute-0 systemd[1]: session-11.scope: Consumed 2.164s CPU time.
Dec 09 15:59:22 compute-0 systemd-logind[786]: Session 11 logged out. Waiting for processes to exit.
Dec 09 15:59:22 compute-0 systemd-logind[786]: Removed session 11.
Dec 09 15:59:23 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 15:59:27 compute-0 sshd-session[53635]: Accepted publickey for zuul from 192.168.122.30 port 51646 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 15:59:27 compute-0 systemd-logind[786]: New session 12 of user zuul.
Dec 09 15:59:27 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 09 15:59:27 compute-0 sshd-session[53635]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 15:59:28 compute-0 python3.9[53789]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:59:29 compute-0 python3.9[53943]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:59:30 compute-0 sudo[54097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iosbtqvmkncgoytdvrgequpfkdrvbshh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295970.2190468-40-71687740728917/AnsiballZ_setup.py'
Dec 09 15:59:30 compute-0 sudo[54097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:30 compute-0 python3.9[54099]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 15:59:31 compute-0 sudo[54097]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:31 compute-0 sudo[54181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdangmnolkxdmjbdlfasxblllifvkqvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295970.2190468-40-71687740728917/AnsiballZ_dnf.py'
Dec 09 15:59:31 compute-0 sudo[54181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:31 compute-0 python3.9[54183]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 15:59:32 compute-0 sudo[54181]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:33 compute-0 sudo[54335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifzstnjsklifngzfszbypquoeytxlpbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295973.1481364-52-95326280059115/AnsiballZ_setup.py'
Dec 09 15:59:33 compute-0 sudo[54335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:33 compute-0 python3.9[54337]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 15:59:33 compute-0 sudo[54335]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:34 compute-0 sudo[54530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxqzoraqsnmdifjqxklvlznznpboexiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295974.180294-63-277474760394470/AnsiballZ_file.py'
Dec 09 15:59:34 compute-0 sudo[54530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:34 compute-0 python3.9[54532]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:59:34 compute-0 sudo[54530]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:35 compute-0 sudo[54683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-royaixvzkbkgfvoxgbgpmgfuylsybuud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295974.9519355-71-53525869988238/AnsiballZ_command.py'
Dec 09 15:59:35 compute-0 sudo[54683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:35 compute-0 python3.9[54685]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:59:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1236431735-merged.mount: Deactivated successfully.
Dec 09 15:59:35 compute-0 podman[54686]: 2025-12-09 15:59:35.688129535 +0000 UTC m=+0.082389688 system refresh
Dec 09 15:59:35 compute-0 sudo[54683]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:36 compute-0 sudo[54846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujuyjnredsefjuqhywvhbdgffyvdtszv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295975.885496-79-40679354939450/AnsiballZ_stat.py'
Dec 09 15:59:36 compute-0 sudo[54846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:36 compute-0 python3.9[54848]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 15:59:36 compute-0 sudo[54846]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 15:59:37 compute-0 sudo[54969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aycgeaqgzjxdpvzzhrcudxohefxalqwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295975.885496-79-40679354939450/AnsiballZ_copy.py'
Dec 09 15:59:37 compute-0 sudo[54969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:37 compute-0 python3.9[54971]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765295975.885496-79-40679354939450/.source.json follow=False _original_basename=podman_network_config.j2 checksum=01310e0f5f7be538a16f9dde7bf4f81af52ba500 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:59:37 compute-0 sudo[54969]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:37 compute-0 sudo[55121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjwgvagtioctnemsuhyodqwmqevphrmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295977.408418-94-179673394594624/AnsiballZ_stat.py'
Dec 09 15:59:37 compute-0 sudo[55121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:37 compute-0 python3.9[55123]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 15:59:37 compute-0 sudo[55121]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:38 compute-0 sudo[55244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmqdircxazyjomjyxqacgmtsbtrvtchx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295977.408418-94-179673394594624/AnsiballZ_copy.py'
Dec 09 15:59:38 compute-0 sudo[55244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:38 compute-0 python3.9[55246]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765295977.408418-94-179673394594624/.source.conf follow=False _original_basename=registries.conf.j2 checksum=995fd093794df4c7d9de3e398215433bce0b1dca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 15:59:38 compute-0 sudo[55244]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:39 compute-0 sudo[55396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydhvipwnuihnjztwpeccwnfnquguvfii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295978.6559498-110-123074619368327/AnsiballZ_ini_file.py'
Dec 09 15:59:39 compute-0 sudo[55396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:39 compute-0 python3.9[55398]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 15:59:39 compute-0 sudo[55396]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:39 compute-0 sudo[55548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gppcfyajzirvsbfshwyabhjnbwaflmcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295979.53942-110-239534998105735/AnsiballZ_ini_file.py'
Dec 09 15:59:39 compute-0 sudo[55548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:39 compute-0 python3.9[55550]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 15:59:39 compute-0 sudo[55548]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:40 compute-0 sudo[55700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxwgyysgvjvaaxltljgdvvhsxpbktltj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295980.1382425-110-181521109714588/AnsiballZ_ini_file.py'
Dec 09 15:59:40 compute-0 sudo[55700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:40 compute-0 python3.9[55702]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 15:59:40 compute-0 sudo[55700]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:41 compute-0 sudo[55852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiddkdefcbgpyunsbwooltwhkjffnfzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295980.7968373-110-229005255736315/AnsiballZ_ini_file.py'
Dec 09 15:59:41 compute-0 sudo[55852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:41 compute-0 python3.9[55854]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 15:59:41 compute-0 sudo[55852]: pam_unix(sudo:session): session closed for user root
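The four ini_file tasks above (15:59:39 through 15:59:41) each assert a single key in /etc/containers/containers.conf. Reconstructing only from the logged section/option/value triples, the file should end up containing at least the following (ordering and any pre-existing keys are not visible in the log):

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"

The quoted values appear quoted in the log itself; ini_file writes them through verbatim, which matches the TOML syntax containers.conf expects.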
Dec 09 15:59:41 compute-0 sudo[56004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljysdwqnvroicjkivnbuaqwxedpxtfcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295981.4977136-141-44389596032494/AnsiballZ_dnf.py'
Dec 09 15:59:41 compute-0 sudo[56004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:42 compute-0 python3.9[56006]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 15:59:43 compute-0 sudo[56004]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:44 compute-0 sudo[56159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypgqmsdhmxsberzhdvkiqcyajsxqrgje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295983.8067765-152-155947062203557/AnsiballZ_setup.py'
Dec 09 15:59:44 compute-0 sudo[56159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:44 compute-0 sshd-session[56032]: Invalid user admin from 146.190.31.45 port 44502
Dec 09 15:59:44 compute-0 sshd-session[56032]: Connection closed by invalid user admin 146.190.31.45 port 44502 [preauth]
Dec 09 15:59:44 compute-0 python3.9[56161]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 15:59:44 compute-0 sudo[56159]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:44 compute-0 sudo[56313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smffahluqyklznlpwbbjvsnaygryeaab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295984.6331284-160-28830436353524/AnsiballZ_stat.py'
Dec 09 15:59:44 compute-0 sudo[56313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:45 compute-0 python3.9[56315]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 15:59:45 compute-0 sudo[56313]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:45 compute-0 sudo[56465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edhpoowxowhxukkldhuxtneayvfaawju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295985.2693036-169-276251873803934/AnsiballZ_stat.py'
Dec 09 15:59:45 compute-0 sudo[56465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:45 compute-0 python3.9[56467]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 15:59:45 compute-0 sudo[56465]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:46 compute-0 sudo[56617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npekjcqjxcvfubuggxvqdhbavehvlzmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295986.0783956-179-54567389435682/AnsiballZ_command.py'
Dec 09 15:59:46 compute-0 sudo[56617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:46 compute-0 python3.9[56619]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 15:59:46 compute-0 sudo[56617]: pam_unix(sudo:session): session closed for user root
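systemctl is-system-running prints the manager's overall state (initializing, starting, running, degraded, maintenance, stopping, among others) and exits 0 only for "running", so this task is a cheap health gate before the heavier configuration steps:

    $ systemctl is-system-running
    running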
Dec 09 15:59:47 compute-0 sudo[56770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdorbzltbibxuvzuxjgzykwbqtnyxdrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295986.9007707-189-25086344735548/AnsiballZ_service_facts.py'
Dec 09 15:59:47 compute-0 sudo[56770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:47 compute-0 python3.9[56772]: ansible-service_facts Invoked
Dec 09 15:59:47 compute-0 network[56789]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 15:59:47 compute-0 network[56790]: 'network-scripts' will be removed from distribution in near future.
Dec 09 15:59:47 compute-0 network[56791]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 15:59:50 compute-0 sudo[56770]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:51 compute-0 sudo[57074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovnhsyubzewxdkshzucbximaztsxcdme ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765295990.7988822-204-26888315313263/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765295990.7988822-204-26888315313263/args'
Dec 09 15:59:51 compute-0 sudo[57074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:51 compute-0 sudo[57074]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:51 compute-0 sudo[57241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmllqdwjvsrpmflxcpqqhoschceexzxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295991.4639435-215-100005002047169/AnsiballZ_dnf.py'
Dec 09 15:59:51 compute-0 sudo[57241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:51 compute-0 python3.9[57243]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 15:59:53 compute-0 sudo[57241]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:54 compute-0 sudo[57394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbyiqmnbivantcgumkoqpksoujezdopq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295993.7812347-228-58194324048730/AnsiballZ_package_facts.py'
Dec 09 15:59:54 compute-0 sudo[57394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:54 compute-0 python3.9[57396]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 09 15:59:54 compute-0 sudo[57394]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:55 compute-0 sudo[57546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppirfisiamjgyfkfoetqrrzggykoytex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295995.2724786-238-121922651090263/AnsiballZ_stat.py'
Dec 09 15:59:55 compute-0 sudo[57546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:55 compute-0 python3.9[57548]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 15:59:55 compute-0 sudo[57546]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:56 compute-0 sudo[57671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snmybfkemhuwlhpwacygjwqngtrzsvuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295995.2724786-238-121922651090263/AnsiballZ_copy.py'
Dec 09 15:59:56 compute-0 sudo[57671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:56 compute-0 python3.9[57673]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765295995.2724786-238-121922651090263/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:59:56 compute-0 sudo[57671]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:56 compute-0 sudo[57825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hclvbeuwfprkyjipinpxivwiavvdqozr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295996.6699739-253-248311046343002/AnsiballZ_stat.py'
Dec 09 15:59:56 compute-0 sudo[57825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:57 compute-0 python3.9[57827]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 15:59:57 compute-0 sudo[57825]: pam_unix(sudo:session): session closed for user root
Dec 09 15:59:57 compute-0 sudo[57950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcmtdpnwvlbhhvxkfjqeaffhjqstpyvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295996.6699739-253-248311046343002/AnsiballZ_copy.py'
Dec 09 15:59:57 compute-0 sudo[57950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:57 compute-0 python3.9[57952]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765295996.6699739-253-248311046343002/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:59:57 compute-0 sudo[57950]: pam_unix(sudo:session): session closed for user root
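Both chrony files are deployed with backup=True, so Ansible keeps a timestamped copy of the previous version (e.g. /etc/chrony.conf.<pid>.<date>~) before replacing it. The rendered contents are not logged (content=NOT_LOGGING_PARAMETER; only SHA-1 checksums appear), so any reconstruction is hypothetical; a typical chrony.conf.j2 rendering from a timesync role (note the timesync_provider probe at 15:59:51) would look like:

    # hypothetical rendering only; the actual servers come from role variables not in the log
    pool pool.ntp.org iburst
    driftfile /var/lib/chrony/drift
    makestep 1.0 3
    rtcsync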
Dec 09 15:59:58 compute-0 sudo[58104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odaoqhrfkxpfxswyvvsnybyhydvrzykw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295998.1933765-274-48634827218577/AnsiballZ_lineinfile.py'
Dec 09 15:59:58 compute-0 sudo[58104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 15:59:58 compute-0 python3.9[58106]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 15:59:58 compute-0 sudo[58104]: pam_unix(sudo:session): session closed for user root
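The resulting /etc/sysconfig/network line is exactly the one logged:

    PEERNTP=no

On RHEL-family hosts the DHCP client hook scripts feed DHCP-supplied NTP servers into chronyd unless PEERNTP=no is set, so this pins time sync to the servers rendered into /etc/chrony.conf above.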
Dec 09 15:59:59 compute-0 sudo[58258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipgcfouteioofhlljgtlelpxxksmhpgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295999.4046052-289-167772537278558/AnsiballZ_setup.py'
Dec 09 15:59:59 compute-0 sudo[58258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:00 compute-0 python3.9[58260]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:00:00 compute-0 sudo[58258]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:00 compute-0 sudo[58342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssocpuqrxhlvlmnlmmwntcfzehneqsxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765295999.4046052-289-167772537278558/AnsiballZ_systemd.py'
Dec 09 16:00:00 compute-0 sudo[58342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:01 compute-0 python3.9[58344]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:00:01 compute-0 sudo[58342]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:01 compute-0 anacron[4310]: Job `cron.weekly' started
Dec 09 16:00:01 compute-0 anacron[4310]: Job `cron.weekly' terminated
Dec 09 16:00:01 compute-0 sudo[58498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxevewaqqlgmdyloaulihvpbeimmncav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296001.65529-305-12736893162204/AnsiballZ_setup.py'
Dec 09 16:00:01 compute-0 sudo[58498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:02 compute-0 python3.9[58500]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:00:02 compute-0 sudo[58498]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:02 compute-0 sudo[58582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgxpafinxckmbyaohwkyjvhgvltatncz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296001.65529-305-12736893162204/AnsiballZ_systemd.py'
Dec 09 16:00:02 compute-0 sudo[58582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:03 compute-0 python3.9[58584]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:00:03 compute-0 chronyd[793]: chronyd exiting
Dec 09 16:00:03 compute-0 systemd[1]: Stopping NTP client/server...
Dec 09 16:00:03 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 09 16:00:03 compute-0 systemd[1]: Stopped NTP client/server.
Dec 09 16:00:03 compute-0 systemd[1]: Starting NTP client/server...
Dec 09 16:00:03 compute-0 chronyd[58592]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 09 16:00:03 compute-0 chronyd[58592]: Frequency -28.511 +/- 0.176 ppm read from /var/lib/chrony/drift
Dec 09 16:00:03 compute-0 chronyd[58592]: Loaded seccomp filter (level 2)
Dec 09 16:00:03 compute-0 systemd[1]: Started NTP client/server.
Dec 09 16:00:03 compute-0 sudo[58582]: pam_unix(sudo:session): session closed for user root
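state=restarted maps to a full stop/start, visible above as chronyd 4.8 coming back up and re-reading its drift file. A manual equivalent plus a quick verification would be:

    systemctl restart chronyd
    chronyc tracking      # offset and stratum of the selected source
    chronyc sources -v    # list configured sources with column legend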
Dec 09 16:00:03 compute-0 sshd-session[53638]: Connection closed by 192.168.122.30 port 51646
Dec 09 16:00:03 compute-0 sshd-session[53635]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:00:03 compute-0 systemd-logind[786]: Session 12 logged out. Waiting for processes to exit.
Dec 09 16:00:03 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 09 16:00:03 compute-0 systemd[1]: session-12.scope: Consumed 24.791s CPU time.
Dec 09 16:00:03 compute-0 systemd-logind[786]: Removed session 12.
Dec 09 16:00:09 compute-0 sshd-session[58618]: Accepted publickey for zuul from 192.168.122.30 port 58826 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:00:09 compute-0 systemd-logind[786]: New session 13 of user zuul.
Dec 09 16:00:09 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 09 16:00:09 compute-0 sshd-session[58618]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:00:09 compute-0 sudo[58771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkbsytaaiucvwdeohuebulbxjzsbzlkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296009.215784-22-149468644008662/AnsiballZ_file.py'
Dec 09 16:00:09 compute-0 sudo[58771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:09 compute-0 python3.9[58773]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:09 compute-0 sudo[58771]: pam_unix(sudo:session): session closed for user root
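From the logged arguments, the task is equivalent to:

    - ansible.builtin.file:
        path: /var/lib/edpm-config/firewall
        state: directory
        owner: root
        group: root
        mode: '0750'

The same directory is re-asserted twice more below (16:00:47 and 16:00:50) by later roles; file with state=directory is idempotent, so those runs simply report no change.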
Dec 09 16:00:10 compute-0 sudo[58923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfgopesynzhslwqrajmnaxbfywssvenz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296010.0086124-34-129158462887633/AnsiballZ_stat.py'
Dec 09 16:00:10 compute-0 sudo[58923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:10 compute-0 python3.9[58925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:10 compute-0 sudo[58923]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:11 compute-0 sudo[59046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulytryrqdhzqrwmgzuqzzoqhgxjyckza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296010.0086124-34-129158462887633/AnsiballZ_copy.py'
Dec 09 16:00:11 compute-0 sudo[59046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:11 compute-0 python3.9[59048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765296010.0086124-34-129158462887633/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:11 compute-0 sudo[59046]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:11 compute-0 sshd-session[58621]: Connection closed by 192.168.122.30 port 58826
Dec 09 16:00:11 compute-0 sshd-session[58618]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:00:11 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 09 16:00:11 compute-0 systemd[1]: session-13.scope: Consumed 1.663s CPU time.
Dec 09 16:00:11 compute-0 systemd-logind[786]: Session 13 logged out. Waiting for processes to exit.
Dec 09 16:00:11 compute-0 systemd-logind[786]: Removed session 13.
Dec 09 16:00:17 compute-0 sshd-session[59073]: Accepted publickey for zuul from 192.168.122.30 port 41650 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:00:17 compute-0 systemd-logind[786]: New session 14 of user zuul.
Dec 09 16:00:17 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 09 16:00:17 compute-0 sshd-session[59073]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:00:19 compute-0 python3.9[59226]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:00:20 compute-0 sudo[59380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buppjkvotwfjfwkmowhksbtfyjccsius ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296019.5200365-33-14750860729040/AnsiballZ_file.py'
Dec 09 16:00:20 compute-0 sudo[59380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:20 compute-0 python3.9[59382]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:20 compute-0 sudo[59380]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:20 compute-0 sudo[59555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvfkufmnpjuhxmzljzrtxddfdhzhkscy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296020.3762593-41-234106006176380/AnsiballZ_stat.py'
Dec 09 16:00:20 compute-0 sudo[59555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:21 compute-0 python3.9[59557]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:21 compute-0 sudo[59555]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:21 compute-0 sudo[59678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfvnkygitryjayjofayqyygtxblbzpnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296020.3762593-41-234106006176380/AnsiballZ_copy.py'
Dec 09 16:00:21 compute-0 sudo[59678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:21 compute-0 python3.9[59680]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765296020.3762593-41-234106006176380/.source.json _original_basename=.l84ciw7s follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:21 compute-0 sudo[59678]: pam_unix(sudo:session): session closed for user root
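The checksum logged for /root/.config/containers/auth.json (bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f) matches the SHA-1 of the two-byte string "{}", i.e. an empty JSON object: no registry credentials are being installed at this point. This is easy to confirm:

    $ python3 -c 'import hashlib; print(hashlib.sha1(b"{}").hexdigest())'
    bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f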
Dec 09 16:00:22 compute-0 sudo[59830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icspvgnwqwmumgrxdbhpiderxlbofwyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296022.1172667-64-230729900444927/AnsiballZ_stat.py'
Dec 09 16:00:22 compute-0 sudo[59830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:22 compute-0 python3.9[59832]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:22 compute-0 sudo[59830]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:22 compute-0 sudo[59953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kffhkrpefmfgxxrxmitqfnmdnpgimple ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296022.1172667-64-230729900444927/AnsiballZ_copy.py'
Dec 09 16:00:22 compute-0 sudo[59953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:23 compute-0 python3.9[59955]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765296022.1172667-64-230729900444927/.source _original_basename=.wnpk7ief follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:23 compute-0 sudo[59953]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:23 compute-0 sudo[60105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwqvfkehfuoowhnkqfyjkvrqwytupqps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296023.2722118-80-183301344463518/AnsiballZ_file.py'
Dec 09 16:00:23 compute-0 sudo[60105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:23 compute-0 python3.9[60107]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:00:23 compute-0 sudo[60105]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:24 compute-0 sudo[60257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzhabnefurvrdnaaotyasovnsrzjuttq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296023.8941808-88-147085093121386/AnsiballZ_stat.py'
Dec 09 16:00:24 compute-0 sudo[60257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:24 compute-0 python3.9[60259]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:24 compute-0 sudo[60257]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:24 compute-0 sudo[60380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvywtjtugargzqywvqzupbctwoztybwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296023.8941808-88-147085093121386/AnsiballZ_copy.py'
Dec 09 16:00:24 compute-0 sudo[60380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:24 compute-0 python3.9[60382]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765296023.8941808-88-147085093121386/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:00:24 compute-0 sudo[60380]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:25 compute-0 sudo[60532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njtggmjmrtimokdjnzezkigpqlblpako ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296025.125704-88-46511309966283/AnsiballZ_stat.py'
Dec 09 16:00:25 compute-0 sudo[60532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:25 compute-0 python3.9[60534]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:25 compute-0 sudo[60532]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:25 compute-0 sudo[60655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lemoryiyakfqzuyeaazdrdquphrvmzoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296025.125704-88-46511309966283/AnsiballZ_copy.py'
Dec 09 16:00:25 compute-0 sudo[60655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:26 compute-0 python3.9[60657]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765296025.125704-88-46511309966283/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:00:26 compute-0 sudo[60655]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:26 compute-0 sudo[60807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umdrwvbcsnpdqkwingghcgezqjnrsnwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296026.3885758-117-254944398756817/AnsiballZ_file.py'
Dec 09 16:00:26 compute-0 sudo[60807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:26 compute-0 python3.9[60809]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:26 compute-0 sudo[60807]: pam_unix(sudo:session): session closed for user root
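mode=420 above is not a literal octal mode: the playbook evidently passed the mode as an unquoted YAML number, which YAML 1.1 parses as octal 0644 and Ansible then logs in decimal (420 == 0o644), so the applied permission bits are still 0644. Quoting the value ('0644') avoids the ambiguity that ansible-lint flags as risky-octal:

    $ python3 -c 'print(oct(420))'
    0o644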
Dec 09 16:00:27 compute-0 sudo[60961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfuhzvfdebxcsxqpdvdedpgxcomledpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296027.1338787-125-112187869709833/AnsiballZ_stat.py'
Dec 09 16:00:27 compute-0 sudo[60961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:27 compute-0 python3.9[60963]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:27 compute-0 sudo[60961]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:27 compute-0 sshd-session[60909]: Invalid user admin from 146.190.31.45 port 48572
Dec 09 16:00:27 compute-0 sshd-session[60909]: Connection closed by invalid user admin 146.190.31.45 port 48572 [preauth]
Dec 09 16:00:28 compute-0 sudo[61084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coroduhgneczvinrmbcqerorclsnwdqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296027.1338787-125-112187869709833/AnsiballZ_copy.py'
Dec 09 16:00:28 compute-0 sudo[61084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:28 compute-0 python3.9[61086]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296027.1338787-125-112187869709833/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:28 compute-0 sudo[61084]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:28 compute-0 sudo[61236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lseifvqsgagoxcjnhexqnkgmptebxmhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296028.3997486-140-61599016377858/AnsiballZ_stat.py'
Dec 09 16:00:28 compute-0 sudo[61236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:28 compute-0 python3.9[61238]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:28 compute-0 sudo[61236]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:29 compute-0 sudo[61359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqaaprnosqzghseblpriciephekljcab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296028.3997486-140-61599016377858/AnsiballZ_copy.py'
Dec 09 16:00:29 compute-0 sudo[61359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:29 compute-0 python3.9[61361]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296028.3997486-140-61599016377858/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:29 compute-0 sudo[61359]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:30 compute-0 sudo[61511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjhfyqozgjnywfebavirwtgmtgvojhzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296029.7172973-155-205318853640817/AnsiballZ_systemd.py'
Dec 09 16:00:30 compute-0 sudo[61511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:30 compute-0 python3.9[61513]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:00:30 compute-0 systemd[1]: Reloading.
Dec 09 16:00:30 compute-0 systemd-rc-local-generator[61540]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:00:30 compute-0 systemd-sysv-generator[61544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:00:30 compute-0 systemd[1]: Reloading.
Dec 09 16:00:31 compute-0 systemd-sysv-generator[61584]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:00:31 compute-0 systemd-rc-local-generator[61580]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:00:31 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 09 16:00:31 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 09 16:00:31 compute-0 sudo[61511]: pam_unix(sudo:session): session closed for user root
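Installing the unit together with a preset under /etc/systemd/system-preset means a later `systemctl preset` (at image build or first boot) will also enable it. Preset files are plain enable/disable directives, so 91-edpm-container-shutdown.preset most likely reads (content not shown in the log):

    enable edpm-container-shutdown.service

The ansible systemd call itself amounts to `systemctl daemon-reload` (the two Reloading. passes above) followed by `systemctl enable --now edpm-container-shutdown`.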
Dec 09 16:00:31 compute-0 sudo[61739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzpwcdjqjtyonncsmthxmcfateolddvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296031.417186-163-244388616222816/AnsiballZ_stat.py'
Dec 09 16:00:31 compute-0 sudo[61739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:31 compute-0 python3.9[61741]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:31 compute-0 sudo[61739]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:32 compute-0 sudo[61862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbgulgimfjvrjeteuaplkkpswlgkensv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296031.417186-163-244388616222816/AnsiballZ_copy.py'
Dec 09 16:00:32 compute-0 sudo[61862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:32 compute-0 python3.9[61864]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296031.417186-163-244388616222816/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:32 compute-0 sudo[61862]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:33 compute-0 sudo[62014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnpwkpirsaynoubavbqakefmdvlqsdpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296032.8107924-178-104833333770382/AnsiballZ_stat.py'
Dec 09 16:00:33 compute-0 sudo[62014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:33 compute-0 python3.9[62016]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:33 compute-0 sudo[62014]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:33 compute-0 sudo[62137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykybihrsdjwsraidotyciayodmujenep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296032.8107924-178-104833333770382/AnsiballZ_copy.py'
Dec 09 16:00:33 compute-0 sudo[62137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:33 compute-0 python3.9[62139]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296032.8107924-178-104833333770382/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:33 compute-0 sudo[62137]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:34 compute-0 sudo[62289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbcxeeuxfawluxmonbrmkbgovowozuoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296034.064899-193-136889515503055/AnsiballZ_systemd.py'
Dec 09 16:00:34 compute-0 sudo[62289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:34 compute-0 python3.9[62291]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:00:34 compute-0 systemd[1]: Reloading.
Dec 09 16:00:34 compute-0 systemd-rc-local-generator[62321]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:00:34 compute-0 systemd-sysv-generator[62324]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:00:35 compute-0 systemd[1]: Reloading.
Dec 09 16:00:35 compute-0 systemd-sysv-generator[62361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:00:35 compute-0 systemd-rc-local-generator[62357]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:00:35 compute-0 systemd[1]: Starting Create netns directory...
Dec 09 16:00:35 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 09 16:00:35 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 09 16:00:35 compute-0 systemd[1]: Finished Create netns directory.
Dec 09 16:00:35 compute-0 sudo[62289]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:36 compute-0 python3.9[62517]: ansible-ansible.builtin.service_facts Invoked
Dec 09 16:00:36 compute-0 network[62534]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 16:00:36 compute-0 network[62535]: 'network-scripts' will be removed from distribution in near future.
Dec 09 16:00:36 compute-0 network[62536]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 16:00:39 compute-0 sudo[62796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emxnnrpuewxtlxblxwulyvfzejcbnobm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296039.4711905-209-274430256413386/AnsiballZ_systemd.py'
Dec 09 16:00:39 compute-0 sudo[62796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:40 compute-0 python3.9[62798]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:00:40 compute-0 systemd[1]: Reloading.
Dec 09 16:00:40 compute-0 systemd-sysv-generator[62827]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:00:40 compute-0 systemd-rc-local-generator[62822]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:00:40 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 09 16:00:40 compute-0 iptables.init[62837]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 09 16:00:40 compute-0 iptables.init[62837]: iptables: Flushing firewall rules: [  OK  ]
Dec 09 16:00:40 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 09 16:00:40 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 09 16:00:40 compute-0 sudo[62796]: pam_unix(sudo:session): session closed for user root
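The iptables init-script shutdown seen above resets every chain policy to ACCEPT across the raw, mangle, filter and nat tables and then flushes all rules, roughly:

    iptables -P INPUT ACCEPT    # repeated for each chain in each table
    iptables -F
    iptables -t nat -F

so the host is briefly unfiltered until nftables takes over in the steps below.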
Dec 09 16:00:41 compute-0 sudo[63031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byryntleqpncaeppqoqmdgmdkerclmoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296041.0264423-209-234317712101304/AnsiballZ_systemd.py'
Dec 09 16:00:41 compute-0 sudo[63031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:41 compute-0 python3.9[63033]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:00:41 compute-0 sudo[63031]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:42 compute-0 sudo[63185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uidtgeufripolqkpdpybbcivenyszijc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296042.0397036-225-183667109739390/AnsiballZ_systemd.py'
Dec 09 16:00:42 compute-0 sudo[63185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:42 compute-0 python3.9[63187]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:00:42 compute-0 systemd[1]: Reloading.
Dec 09 16:00:42 compute-0 systemd-sysv-generator[63219]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:00:42 compute-0 systemd-rc-local-generator[63215]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:00:42 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 09 16:00:42 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 09 16:00:42 compute-0 sudo[63185]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:43 compute-0 sudo[63377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjdmvyndmxhcvynnywsoclrtriuoxarr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296043.2237148-233-119391561288116/AnsiballZ_command.py'
Dec 09 16:00:43 compute-0 sudo[63377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:43 compute-0 python3.9[63379]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:00:43 compute-0 sudo[63377]: pam_unix(sudo:session): session closed for user root
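`nft flush ruleset` atomically drops every table, chain and rule across all address families, giving the nftables setup a clean slate before /etc/nftables/iptables.nft is installed (below) and rules are regenerated from the YAML files under /var/lib/edpm-config/firewall. Running `nft list ruleset` immediately afterwards should print nothing.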
Dec 09 16:00:44 compute-0 sudo[63530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdqubtdvzvnzviznclwjouqjjvqrzsto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296044.308838-247-24616177993874/AnsiballZ_stat.py'
Dec 09 16:00:44 compute-0 sudo[63530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:44 compute-0 python3.9[63532]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:44 compute-0 sudo[63530]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:45 compute-0 sudo[63655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlcvcuqmvqmdhyudadwjepwycyuqdnmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296044.308838-247-24616177993874/AnsiballZ_copy.py'
Dec 09 16:00:45 compute-0 sudo[63655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:45 compute-0 python3.9[63657]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765296044.308838-247-24616177993874/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:45 compute-0 sudo[63655]: pam_unix(sudo:session): session closed for user root
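The sshd_config deployment uses copy's validate hook: Ansible writes the new file to a temporary path, substitutes that path for %s, runs the command, and only moves the file into /etc/ssh/sshd_config if it exits 0. Here the validator is `/usr/sbin/sshd -T -f %s`, sshd's extended test mode, which rejects syntactically invalid configs before they can lock anyone out. Reconstructed from the logged arguments:

    - ansible.legacy.copy:
        dest: /etc/ssh/sshd_config
        mode: '0600'
        src: sshd_config_block.j2   # logged as _original_basename; rendered upstream
        validate: /usr/sbin/sshd -T -f %s

The reload that follows (state=reloaded) delivers SIGHUP to the running daemon, which the sshd[1005] lines below confirm.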
Dec 09 16:00:46 compute-0 sudo[63808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjxpmdpzuhvrgqxtspiblnmtwooazxed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296045.8519378-262-160203595264081/AnsiballZ_systemd.py'
Dec 09 16:00:46 compute-0 sudo[63808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:46 compute-0 python3.9[63810]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:00:46 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 09 16:00:46 compute-0 sshd[1005]: Received SIGHUP; restarting.
Dec 09 16:00:46 compute-0 sshd[1005]: Server listening on 0.0.0.0 port 22.
Dec 09 16:00:46 compute-0 sshd[1005]: Server listening on :: port 22.
Dec 09 16:00:46 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 09 16:00:46 compute-0 sudo[63808]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:46 compute-0 sudo[63964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-altyedmbnsbeqfkeuanvfjtmqgwgkoga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296046.6887078-270-185675697276214/AnsiballZ_file.py'
Dec 09 16:00:46 compute-0 sudo[63964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:47 compute-0 python3.9[63966]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:47 compute-0 sudo[63964]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:47 compute-0 sudo[64116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhaenwpvwrdgalvevdvohrcsbbxriwgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296047.3921616-278-58247260619872/AnsiballZ_stat.py'
Dec 09 16:00:47 compute-0 sudo[64116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:47 compute-0 python3.9[64118]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:47 compute-0 sudo[64116]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:48 compute-0 sudo[64239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmosmfuzlgkeskgcopjgqipctwspohxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296047.3921616-278-58247260619872/AnsiballZ_copy.py'
Dec 09 16:00:48 compute-0 sudo[64239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:48 compute-0 python3.9[64241]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296047.3921616-278-58247260619872/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:48 compute-0 sudo[64239]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:49 compute-0 sudo[64391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvrlkcqvqpryiwixqaxqobidsfoyopjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296048.8495965-296-16026146084246/AnsiballZ_timezone.py'
Dec 09 16:00:49 compute-0 sudo[64391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:49 compute-0 python3.9[64393]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 09 16:00:49 compute-0 systemd[1]: Starting Time & Date Service...
Dec 09 16:00:49 compute-0 systemd[1]: Started Time & Date Service.
Dec 09 16:00:49 compute-0 sudo[64391]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:50 compute-0 sudo[64547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjmwumhfpkuctogysewstgimivepaaod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296050.0012424-305-232094612144164/AnsiballZ_file.py'
Dec 09 16:00:50 compute-0 sudo[64547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:50 compute-0 python3.9[64549]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:50 compute-0 sudo[64547]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:51 compute-0 sudo[64699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkxzooilrmrdlawbgnqtynvyclwzldql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296050.8163261-313-38427946180754/AnsiballZ_stat.py'
Dec 09 16:00:51 compute-0 sudo[64699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:51 compute-0 python3.9[64701]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:51 compute-0 sudo[64699]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:51 compute-0 sudo[64822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niokurlyoiaopochrjznvbwsdmbdajmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296050.8163261-313-38427946180754/AnsiballZ_copy.py'
Dec 09 16:00:51 compute-0 sudo[64822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:51 compute-0 python3.9[64824]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765296050.8163261-313-38427946180754/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:51 compute-0 sudo[64822]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:52 compute-0 sudo[64974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vilekfpkidxwuyekeylssjjhekclbfsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296052.1445496-328-190780954585021/AnsiballZ_stat.py'
Dec 09 16:00:52 compute-0 sudo[64974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:52 compute-0 python3.9[64976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:52 compute-0 sudo[64974]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:52 compute-0 sudo[65097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glesxttdxkysmnylnzernnqqnorindsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296052.1445496-328-190780954585021/AnsiballZ_copy.py'
Dec 09 16:00:52 compute-0 sudo[65097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:53 compute-0 python3.9[65099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765296052.1445496-328-190780954585021/.source.yaml _original_basename=.k_tsmido follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:53 compute-0 sudo[65097]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:53 compute-0 sudo[65249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owgikkaiprqciknyojmsagjsmrlngfpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296053.261563-343-56440218508896/AnsiballZ_stat.py'
Dec 09 16:00:53 compute-0 sudo[65249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:53 compute-0 python3.9[65251]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:53 compute-0 sudo[65249]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:54 compute-0 sudo[65372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myxpvzzdtglyyztqgoeojcmlirsgwvdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296053.261563-343-56440218508896/AnsiballZ_copy.py'
Dec 09 16:00:54 compute-0 sudo[65372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:54 compute-0 python3.9[65374]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296053.261563-343-56440218508896/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:54 compute-0 sudo[65372]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:54 compute-0 sudo[65524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otuwxnpghasrybuiicnkgmmzlywekpvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296054.3449168-358-218538180227915/AnsiballZ_command.py'
Dec 09 16:00:54 compute-0 sudo[65524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:54 compute-0 python3.9[65526]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:00:54 compute-0 sudo[65524]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:55 compute-0 sudo[65677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocyhozaibfbrffhejcfoftpmumxykfht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296054.9942052-366-207276801022048/AnsiballZ_command.py'
Dec 09 16:00:55 compute-0 sudo[65677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:55 compute-0 python3.9[65679]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:00:55 compute-0 sudo[65677]: pam_unix(sudo:session): session closed for user root
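The -j flag makes nft emit the live ruleset as JSON, giving the role a machine-readable snapshot to compare against. The top level is an "nftables" array that opens with a metainfo object; shape only, values illustrative:

    nft -j list ruleset
    # {"nftables": [{"metainfo": {"version": "...", "release_name": "...",
    #                             "json_schema_version": 1}}, ...tables/chains/rules...]}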
Dec 09 16:00:56 compute-0 sudo[65830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lktldjypclnhehgkcsnrspjxhmmkskjf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765296055.6097052-374-203003471026808/AnsiballZ_edpm_nftables_from_files.py'
Dec 09 16:00:56 compute-0 sudo[65830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:56 compute-0 python3[65832]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 09 16:00:56 compute-0 sudo[65830]: pam_unix(sudo:session): session closed for user root
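edpm_nftables_from_files reads every YAML file under /var/lib/edpm-config/firewall (including the edpm-nftables-base.yaml and edpm-nftables-user-rules.yaml written above) and renders them into the nft fragments copied in the following tasks. The rule schema itself never appears in the journal; purely as a hedged sketch, assuming the list-of-rules format used by the edpm-ansible firewall role, a file in that directory might contain:

    # hypothetical content, schema assumed rather than taken from this log
    - rule_name: 003 accept ssh
      rule:
        proto: tcp
        dport: 22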
Dec 09 16:00:56 compute-0 sudo[65982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgizjnlsmgwmbqfhdpjukvkdldqonrgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296056.4130268-382-137293690518249/AnsiballZ_stat.py'
Dec 09 16:00:56 compute-0 sudo[65982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:56 compute-0 python3.9[65984]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:56 compute-0 sudo[65982]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:57 compute-0 sudo[66105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mevfieppmlewyrjczwsmbabtilaojzut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296056.4130268-382-137293690518249/AnsiballZ_copy.py'
Dec 09 16:00:57 compute-0 sudo[66105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:57 compute-0 python3.9[66107]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296056.4130268-382-137293690518249/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:57 compute-0 sudo[66105]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:58 compute-0 sudo[66257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbwjudyekrndubajlacpqropbokyqjjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296057.6430461-397-275632306152621/AnsiballZ_stat.py'
Dec 09 16:00:58 compute-0 sudo[66257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:58 compute-0 python3.9[66259]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:58 compute-0 sudo[66257]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:58 compute-0 sudo[66380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efyvnvkzxgawrzolfinbsqbzppcwrjlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296057.6430461-397-275632306152621/AnsiballZ_copy.py'
Dec 09 16:00:58 compute-0 sudo[66380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:58 compute-0 python3.9[66382]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296057.6430461-397-275632306152621/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:58 compute-0 sudo[66380]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:59 compute-0 sudo[66532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkpmkmtgsquskvntktavnuhsbqfilwvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296058.9200182-412-134580814964834/AnsiballZ_stat.py'
Dec 09 16:00:59 compute-0 sudo[66532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:59 compute-0 python3.9[66534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:00:59 compute-0 sudo[66532]: pam_unix(sudo:session): session closed for user root
Dec 09 16:00:59 compute-0 sudo[66655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feuxiirnafxrrijihviqtmmcbddqejxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296058.9200182-412-134580814964834/AnsiballZ_copy.py'
Dec 09 16:00:59 compute-0 sudo[66655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:00:59 compute-0 python3.9[66657]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296058.9200182-412-134580814964834/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:00:59 compute-0 sudo[66655]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:00 compute-0 sudo[66807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtutglxjbtuezacacjjvpkrkqfvnuylv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296060.0660062-427-272650863808791/AnsiballZ_stat.py'
Dec 09 16:01:00 compute-0 sudo[66807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:00 compute-0 python3.9[66809]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:01:00 compute-0 sudo[66807]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:00 compute-0 sudo[66930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlbckouiebzmfhbkymeixvijhrwvakhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296060.0660062-427-272650863808791/AnsiballZ_copy.py'
Dec 09 16:01:00 compute-0 sudo[66930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:01 compute-0 python3.9[66932]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296060.0660062-427-272650863808791/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:01:01 compute-0 sudo[66930]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:01 compute-0 CROND[67033]: (root) CMD (run-parts /etc/cron.hourly)
Dec 09 16:01:01 compute-0 run-parts[67037]: (/etc/cron.hourly) starting 0anacron
Dec 09 16:01:01 compute-0 run-parts[67050]: (/etc/cron.hourly) finished 0anacron
Dec 09 16:01:01 compute-0 CROND[67032]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 09 16:01:01 compute-0 sudo[67093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohejcbggabiashkcysbrfkgbdteltmuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296061.3102221-442-65532184257010/AnsiballZ_stat.py'
Dec 09 16:01:01 compute-0 sudo[67093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:01 compute-0 python3.9[67095]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:01:01 compute-0 sudo[67093]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:02 compute-0 sudo[67216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsuvarbvpyroyizsxbufhpfjyaifdeex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296061.3102221-442-65532184257010/AnsiballZ_copy.py'
Dec 09 16:01:02 compute-0 sudo[67216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:02 compute-0 python3.9[67218]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296061.3102221-442-65532184257010/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:01:02 compute-0 sudo[67216]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:02 compute-0 sudo[67368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgddmgduogqwtmnhezswayapyptyubfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296062.5677404-457-228008852353329/AnsiballZ_file.py'
Dec 09 16:01:02 compute-0 sudo[67368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:03 compute-0 python3.9[67370]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:01:03 compute-0 sudo[67368]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:03 compute-0 sudo[67520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrkqbhxspiulvtwlrqksguvccznhdhnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296063.1958325-465-1948873717475/AnsiballZ_command.py'
Dec 09 16:01:03 compute-0 sudo[67520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:03 compute-0 python3.9[67522]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:01:03 compute-0 sudo[67520]: pam_unix(sudo:session): session closed for user root
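This task is a pure syntax check: the five generated fragments are concatenated in load order and fed to nft with -c (check), which parses and validates the combined ruleset without committing anything to the kernel. The equivalent shell, as run above:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: dry run, nothing is applied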
Dec 09 16:01:04 compute-0 sudo[67679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faacyswpbkeqzhwxsqiegvkxcsdxvpwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296063.9365325-473-166338194768532/AnsiballZ_blockinfile.py'
Dec 09 16:01:04 compute-0 sudo[67679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:04 compute-0 python3.9[67681]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:01:04 compute-0 sudo[67679]: pam_unix(sudo:session): session closed for user root
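Substituting the marker, marker_begin/marker_end and block parameters logged above, the managed block that blockinfile maintains in /etc/sysconfig/nftables.conf (and validates with nft -c -f %s before swapping the file in) should come out as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK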
Dec 09 16:01:05 compute-0 sudo[67832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjintevmpfjyksuzhrhytcbyhpjliall ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296064.8314657-482-43267914395203/AnsiballZ_file.py'
Dec 09 16:01:05 compute-0 sudo[67832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:05 compute-0 python3.9[67834]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:01:05 compute-0 sudo[67832]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:05 compute-0 sudo[67984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhdkimtyqddzbzidqszzaovnbibsugvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296065.4419956-482-76025736919762/AnsiballZ_file.py'
Dec 09 16:01:05 compute-0 sudo[67984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:05 compute-0 python3.9[67986]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:01:05 compute-0 sudo[67984]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:06 compute-0 sudo[68136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpsftwajxxmgniwgijclqjbqgfelxcxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296066.1043785-497-27234819956943/AnsiballZ_mount.py'
Dec 09 16:01:06 compute-0 sudo[68136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:06 compute-0 python3.9[68138]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 09 16:01:06 compute-0 sudo[68136]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:07 compute-0 sudo[68289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlgkigpzquydomqjikhcpttxvufyzwra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296067.0095706-497-208574454517173/AnsiballZ_mount.py'
Dec 09 16:01:07 compute-0 sudo[68289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:07 compute-0 python3.9[68291]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 09 16:01:07 compute-0 sudo[68289]: pam_unix(sudo:session): session closed for user root
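ansible.posix.mount with state=mounted both mounts the filesystem immediately and, with boot=True, persists it in /etc/fstab. From the parameters above (src=none, fstype=hugetlbfs, opts=pagesize=..., dump=0, passno=0) the two resulting fstab entries should read, give or take whitespace:

    none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    none /dev/hugepages2M hugetlbfs pagesize=2M 0 0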
Dec 09 16:01:07 compute-0 sshd-session[59076]: Connection closed by 192.168.122.30 port 41650
Dec 09 16:01:07 compute-0 sshd-session[59073]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:01:07 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 09 16:01:07 compute-0 systemd[1]: session-14.scope: Consumed 36.160s CPU time.
Dec 09 16:01:07 compute-0 systemd-logind[786]: Session 14 logged out. Waiting for processes to exit.
Dec 09 16:01:07 compute-0 systemd-logind[786]: Removed session 14.
Dec 09 16:01:11 compute-0 sshd-session[68317]: Invalid user admin from 146.190.31.45 port 38034
Dec 09 16:01:12 compute-0 sshd-session[68317]: Connection closed by invalid user admin 146.190.31.45 port 38034 [preauth]
Dec 09 16:01:13 compute-0 sshd-session[68319]: Accepted publickey for zuul from 192.168.122.30 port 35936 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:01:14 compute-0 systemd-logind[786]: New session 15 of user zuul.
Dec 09 16:01:14 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 09 16:01:14 compute-0 sshd-session[68319]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:01:14 compute-0 sudo[68472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwtxbextqdfvrqmnpjaocqukzpzhlnvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296074.1215599-16-253179802070141/AnsiballZ_tempfile.py'
Dec 09 16:01:14 compute-0 sudo[68472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:14 compute-0 python3.9[68474]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 09 16:01:14 compute-0 sudo[68472]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:15 compute-0 sudo[68624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xazhwimozhobvxgtvmdtlrocchigsrpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296074.9239428-28-147213587336494/AnsiballZ_stat.py'
Dec 09 16:01:15 compute-0 sudo[68624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:15 compute-0 python3.9[68626]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:01:15 compute-0 sudo[68624]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:16 compute-0 sudo[68776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpbducyigjpcemvantszmxolejtffmrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296075.6987767-38-264967653470774/AnsiballZ_setup.py'
Dec 09 16:01:16 compute-0 sudo[68776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:16 compute-0 python3.9[68778]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:01:16 compute-0 sudo[68776]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:17 compute-0 sudo[68928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfjocpeuokpaegxqwmmlyhmentfwzncf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296076.7067914-47-271723242474694/AnsiballZ_blockinfile.py'
Dec 09 16:01:17 compute-0 sudo[68928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:17 compute-0 python3.9[68930]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnS9g5EUNjTOnNw1/NVHuiTBWBT9IFzqVskWr/bH4K6HGCIi8LNq90yzJTZK561Sd/uYx7ignQywWdN6h7z4cZr9qv3Rg5CMqKNGI3Sg048aQjly0DCM+3Gz9Rv3pmAPxJFbJAQjjCgUNLfBLLfDxFUAkVAgqUg/ARpI5uIwxyZwq9vel6ajDd6a3tuXm0pB7Aj6McpSTsQ5wrCM2B1yntlCdJbxi44x5Jbq9kvLrDHHnx9KU1MNpbissJRJNAwoPoOuDssuzSqKOUX+6Ya3Nj7voprbIs+3BNo+8Aq8Q69gfCKZA7dqrB0bqiyI8Ydki6AsR/fharltZZhDNjNtKl88xiCZnFENR30EZcbzEMfhwMvvtAvW63JHSlTUovX71mnO5/nkm44DtIMgXcduA8NeG2zlteHjuKEdHzLWlQ1BrGKujEpNjVcM9Xz+i6EN3Gojh0+lwpBH+Pz//D/VDQLhqNVFYd7Tljzr7mpVR8zmnVp8H8lmLLeMbkom3Ni9M=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDhYf7YoPese7yuteLiDPa2HkW82iyY0KjwCmBOU4Lns
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFdTc5nlDCIgIxXZgAZFe06AG23238fPcOdUL2uVInP9TXK6vel5Ou/ZAkkJ/5tJ5tAXqxYNejIamSGf87ZPE9s=
                                             create=True mode=0644 path=/tmp/ansible.ugldlh17 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:01:17 compute-0 sudo[68928]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:17 compute-0 sudo[69080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbnbieaaplxbkbkwzpjctiiycnqqclwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296077.4514468-55-87959007128696/AnsiballZ_command.py'
Dec 09 16:01:17 compute-0 sudo[69080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:18 compute-0 python3.9[69082]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ugldlh17' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:01:18 compute-0 sudo[69080]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:18 compute-0 sudo[69234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjbrmllphwenbdqbdateepfgtneyiuuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296078.3273723-63-38571729627191/AnsiballZ_file.py'
Dec 09 16:01:18 compute-0 sudo[69234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:18 compute-0 python3.9[69236]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ugldlh17 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:01:18 compute-0 sudo[69234]: pam_unix(sudo:session): session closed for user root
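Taken together, the four tasks above implement a write-then-swap update of the system-wide known_hosts file: build the host-key block in a root-owned temp file, overwrite the target in a single shell redirection, then delete the temp file. As plain shell, the same sequence is roughly:

    tmp=$(mktemp /tmp/ansible.XXXXXXXX)       # ansible.builtin.tempfile
    # blockinfile writes the three gathered host keys (rsa, ed25519, ecdsa)
    # between '# BEGIN/END ANSIBLE MANAGED BLOCK' markers in $tmp
    cat "$tmp" > /etc/ssh/ssh_known_hosts     # ansible.legacy.command
    rm -f "$tmp"                              # ansible.builtin.file state=absent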
Dec 09 16:01:19 compute-0 sshd-session[68322]: Connection closed by 192.168.122.30 port 35936
Dec 09 16:01:19 compute-0 sshd-session[68319]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:01:19 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 09 16:01:19 compute-0 systemd[1]: session-15.scope: Consumed 3.212s CPU time.
Dec 09 16:01:19 compute-0 systemd-logind[786]: Session 15 logged out. Waiting for processes to exit.
Dec 09 16:01:19 compute-0 systemd-logind[786]: Removed session 15.
Dec 09 16:01:19 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 09 16:01:24 compute-0 sshd-session[69263]: Accepted publickey for zuul from 192.168.122.30 port 43554 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:01:24 compute-0 systemd-logind[786]: New session 16 of user zuul.
Dec 09 16:01:24 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 09 16:01:24 compute-0 sshd-session[69263]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:01:25 compute-0 python3.9[69416]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:01:27 compute-0 sudo[69570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqtlaalosazogulxjocymzustxdvzrgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296087.3255012-32-163789910988036/AnsiballZ_systemd.py'
Dec 09 16:01:27 compute-0 sudo[69570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:28 compute-0 python3.9[69572]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 09 16:01:28 compute-0 sudo[69570]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:28 compute-0 sudo[69724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vivpbestgopajzubjopjlplaoxctyhjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296088.4663076-40-60141891027989/AnsiballZ_systemd.py'
Dec 09 16:01:28 compute-0 sudo[69724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:29 compute-0 python3.9[69726]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:01:29 compute-0 sudo[69724]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:29 compute-0 sudo[69877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdjznvurnrivyhwylqhptdoqresgidpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296089.2954745-49-274723392937696/AnsiballZ_command.py'
Dec 09 16:01:29 compute-0 sudo[69877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:29 compute-0 python3.9[69879]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:01:29 compute-0 sudo[69877]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:30 compute-0 sudo[70030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugkorzpillswxmuslrglgmxggzycluwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296090.1105604-57-97114431715835/AnsiballZ_stat.py'
Dec 09 16:01:30 compute-0 sudo[70030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:30 compute-0 python3.9[70032]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:01:30 compute-0 sudo[70030]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:31 compute-0 sudo[70184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzcjioexvgthgmemsmbqxjfjlomzqhav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296090.8912537-65-220217426970778/AnsiballZ_command.py'
Dec 09 16:01:31 compute-0 sudo[70184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:31 compute-0 python3.9[70186]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:01:31 compute-0 sudo[70184]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:32 compute-0 sudo[70339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhdqwdgdftvayztnhtxrkagatcytvlsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296091.5872436-73-12209232166915/AnsiballZ_file.py'
Dec 09 16:01:32 compute-0 sudo[70339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:32 compute-0 python3.9[70341]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:01:32 compute-0 sudo[70339]: pam_unix(sudo:session): session closed for user root
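Session 16 is the apply side of the .changed flag that session 14 touched at 16:01:03: the chains file is loaded unconditionally, but the flush/rules/update-jumps fragments are only replayed when the marker file exists, and the marker is removed afterwards so an unchanged ruleset is not reapplied on the next run. Condensed to shell:

    nft -f /etc/nftables/edpm-chains.nft                  # always (re)create the chains
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then  # touched when edpm-rules.nft changed
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi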
Dec 09 16:01:32 compute-0 sshd-session[69266]: Connection closed by 192.168.122.30 port 43554
Dec 09 16:01:32 compute-0 sshd-session[69263]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:01:32 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 09 16:01:32 compute-0 systemd[1]: session-16.scope: Consumed 4.464s CPU time.
Dec 09 16:01:32 compute-0 systemd-logind[786]: Session 16 logged out. Waiting for processes to exit.
Dec 09 16:01:32 compute-0 systemd-logind[786]: Removed session 16.
Dec 09 16:01:38 compute-0 sshd-session[70367]: Accepted publickey for zuul from 192.168.122.30 port 40424 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:01:38 compute-0 systemd-logind[786]: New session 17 of user zuul.
Dec 09 16:01:38 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 09 16:01:38 compute-0 sshd-session[70367]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:01:39 compute-0 python3.9[70520]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:01:40 compute-0 sudo[70674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iunpxnuqerrheiajowhyvetsdenkfstk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296100.591488-34-25768507213029/AnsiballZ_setup.py'
Dec 09 16:01:40 compute-0 sudo[70674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:41 compute-0 python3.9[70676]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:01:41 compute-0 sudo[70674]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:41 compute-0 sudo[70758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edcrtwxqkjfzznvhblsylacgjovzdaud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296100.591488-34-25768507213029/AnsiballZ_dnf.py'
Dec 09 16:01:41 compute-0 sudo[70758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:42 compute-0 python3.9[70760]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 16:01:43 compute-0 sudo[70758]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:44 compute-0 python3.9[70911]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
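needs-restarting -r (from the yum-utils package installed just above) reports whether the host needs a reboot: it exits 1 when a core package (kernel, glibc, systemd and the like) has been updated since boot and 0 otherwise, so the playbook can key off the return code rather than parsing output:

    needs-restarting -r
    rc=$?    # 0 = no reboot needed, 1 = reboot required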
Dec 09 16:01:46 compute-0 python3.9[71062]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 16:01:46 compute-0 python3.9[71212]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:01:46 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 16:01:47 compute-0 python3.9[71363]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:01:47 compute-0 sshd-session[70370]: Connection closed by 192.168.122.30 port 40424
Dec 09 16:01:47 compute-0 sshd-session[70367]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:01:47 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 09 16:01:47 compute-0 systemd[1]: session-17.scope: Consumed 6.081s CPU time.
Dec 09 16:01:47 compute-0 systemd-logind[786]: Session 17 logged out. Waiting for processes to exit.
Dec 09 16:01:47 compute-0 systemd-logind[786]: Removed session 17.
Dec 09 16:01:55 compute-0 sshd-session[71388]: Accepted publickey for zuul from 38.102.83.236 port 40256 ssh2: RSA SHA256:Hm0y35I6QsPK80/qTWUGGvHfgip63xl7qy6rvlCkCac
Dec 09 16:01:55 compute-0 systemd-logind[786]: New session 18 of user zuul.
Dec 09 16:01:55 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 09 16:01:55 compute-0 sshd-session[71388]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:01:55 compute-0 sudo[71464]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miqioubuagphtodbodracluhrtmmtdmn ; /usr/bin/python3'
Dec 09 16:01:55 compute-0 sudo[71464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:55 compute-0 useradd[71468]: new group: name=ceph-admin, GID=42478
Dec 09 16:01:55 compute-0 useradd[71468]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 09 16:01:56 compute-0 sudo[71464]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:56 compute-0 sudo[71550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiytahvqfycuwjkrczsrvoonpsmntfzc ; /usr/bin/python3'
Dec 09 16:01:56 compute-0 sudo[71550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:56 compute-0 sudo[71550]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:56 compute-0 sudo[71623]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhvrmssvwnbyjchnwwfvwxfribbnftks ; /usr/bin/python3'
Dec 09 16:01:56 compute-0 sudo[71623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:56 compute-0 sudo[71623]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:57 compute-0 sudo[71673]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-madzoebseqyvczailoqpkmtpjpwnjcwg ; /usr/bin/python3'
Dec 09 16:01:57 compute-0 sudo[71673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:57 compute-0 sudo[71673]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:57 compute-0 sudo[71699]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psaztelkljlkirkzlwzpaavpxymqfzmx ; /usr/bin/python3'
Dec 09 16:01:57 compute-0 sudo[71699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:57 compute-0 sudo[71699]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:57 compute-0 sudo[71725]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kankmrnnhevdddgvxtfodajlixykvexo ; /usr/bin/python3'
Dec 09 16:01:57 compute-0 sudo[71725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:58 compute-0 sudo[71725]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:58 compute-0 sshd-session[71728]: Invalid user admin from 146.190.31.45 port 56116
Dec 09 16:01:58 compute-0 sudo[71753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fayilcyqoxefrevjzvggmhcbjhbrwaaf ; /usr/bin/python3'
Dec 09 16:01:58 compute-0 sudo[71753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:58 compute-0 sshd-session[71728]: Connection closed by invalid user admin 146.190.31.45 port 56116 [preauth]
Dec 09 16:01:58 compute-0 sudo[71753]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:58 compute-0 sudo[71831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srqlxygsbsahngglarnzpnejkvgtvnjr ; /usr/bin/python3'
Dec 09 16:01:58 compute-0 sudo[71831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:59 compute-0 sudo[71831]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:59 compute-0 sudo[71904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odctbrvafleitfewtcyfqxpsjxcxhvph ; /usr/bin/python3'
Dec 09 16:01:59 compute-0 sudo[71904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:59 compute-0 sudo[71904]: pam_unix(sudo:session): session closed for user root
Dec 09 16:01:59 compute-0 sudo[72006]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mguvhttmmaqburmqqbrirjnuoyvivwiz ; /usr/bin/python3'
Dec 09 16:01:59 compute-0 sudo[72006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:01:59 compute-0 sudo[72006]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:00 compute-0 sudo[72079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgacjifoimwctvbgcrconvrlnheltdou ; /usr/bin/python3'
Dec 09 16:02:00 compute-0 sudo[72079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:00 compute-0 sudo[72079]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:00 compute-0 sudo[72129]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnbuprcsmtopjroqopvhuuceivcczoaw ; /usr/bin/python3'
Dec 09 16:02:00 compute-0 sudo[72129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:00 compute-0 python3[72131]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:02:01 compute-0 sudo[72129]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:02 compute-0 sudo[72224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mslenafgrlhflzczodlkzendhlbqeqdy ; /usr/bin/python3'
Dec 09 16:02:02 compute-0 sudo[72224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:02 compute-0 python3[72226]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 09 16:02:03 compute-0 sudo[72224]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:03 compute-0 sudo[72251]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bazssxflydeumncvepbojwkxlehfrlze ; /usr/bin/python3'
Dec 09 16:02:03 compute-0 sudo[72251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:04 compute-0 python3[72253]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 16:02:04 compute-0 sudo[72251]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:04 compute-0 sudo[72277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deysiggalxwbdxuwkjmwbvhktssiljde ; /usr/bin/python3'
Dec 09 16:02:04 compute-0 sudo[72277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:04 compute-0 python3[72279]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:02:04 compute-0 kernel: loop: module loaded
Dec 09 16:02:04 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec 09 16:02:04 compute-0 sudo[72277]: pam_unix(sudo:session): session closed for user root
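The dd invocation writes zero bytes (count=0) and only seeks to the 20G mark, leaving a 20 GiB sparse file that consumes almost no disk. The kernel line above confirms the size, since loop capacity is reported in 512-byte sectors:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    # same effect as: truncate -s 20G /var/lib/ceph-osd-0.img
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    # 41943040 sectors * 512 B = 21474836480 B = 20 GiB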
Dec 09 16:02:04 compute-0 sudo[72312]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hepywqvntyqlnhbiadcpoaepbbzbwaom ; /usr/bin/python3'
Dec 09 16:02:04 compute-0 sudo[72312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:04 compute-0 python3[72314]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:02:04 compute-0 lvm[72317]: PV /dev/loop3 not used.
Dec 09 16:02:05 compute-0 lvm[72326]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:02:05 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 09 16:02:05 compute-0 lvm[72328]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 09 16:02:05 compute-0 sudo[72312]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:05 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
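The LVM stack built on the loop device uses every free extent of the single-PV volume group; the lvm/systemd lines above are udev-triggered autoactivation firing as soon as the VG becomes complete. Step by step:

    pvcreate /dev/loop3                          # label the loop device as an LVM PV
    vgcreate ceph_vg0 /dev/loop3                 # one-PV volume group
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0   # allocate all free extents
    # result: /dev/ceph_vg0/ceph_lv0 (a.k.a. /dev/mapper/ceph_vg0-ceph_lv0), ~20 GiB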
Dec 09 16:02:06 compute-0 sudo[72404]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khmzicvaxedztuwiettfbqgxalckfqiu ; /usr/bin/python3'
Dec 09 16:02:06 compute-0 sudo[72404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:06 compute-0 python3[72406]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 16:02:06 compute-0 sudo[72404]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:06 compute-0 sudo[72477]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgeajftgetkfyjpzpgswouylaziafefl ; /usr/bin/python3'
Dec 09 16:02:06 compute-0 sudo[72477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:06 compute-0 python3[72479]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765296125.248717-36369-265499890421866/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:02:06 compute-0 sudo[72477]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:07 compute-0 sudo[72527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsjnknhdepcokqezafnjgkubmkvnmiqe ; /usr/bin/python3'
Dec 09 16:02:07 compute-0 sudo[72527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:07 compute-0 python3[72529]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:02:07 compute-0 systemd[1]: Reloading.
Dec 09 16:02:07 compute-0 systemd-rc-local-generator[72549]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:02:07 compute-0 systemd-sysv-generator[72557]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:02:07 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 09 16:02:07 compute-0 bash[72568]: /dev/loop3: [64513]:4327748 (/var/lib/ceph-osd-0.img)
Dec 09 16:02:07 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 09 16:02:07 compute-0 sudo[72527]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:07 compute-0 lvm[72570]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:02:07 compute-0 lvm[72570]: VG ceph_vg0 finished
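The rendered unit itself is not captured in the journal, only its checksum and its effect: a oneshot bash ExecStart that prints the losetup mapping and re-attaches /dev/loop3 across reboots. As a sketch only, assuming the usual re-attach-if-missing pattern for loop-backed OSDs, ceph-osd-losetup-0.service plausibly looks like:

    # /etc/systemd/system/ceph-osd-losetup-0.service
    # hypothetical reconstruction, not taken from this log
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # print the mapping if /dev/loop3 is already attached, otherwise attach it
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target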
Dec 09 16:02:08 compute-0 sudo[72594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmximvtnjltrdcjrmbylhbvywdibzurj ; /usr/bin/python3'
Dec 09 16:02:08 compute-0 sudo[72594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:08 compute-0 python3[72596]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 09 16:02:09 compute-0 sudo[72594]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:09 compute-0 sudo[72621]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gokvlpcfxsmdsvegmvpjbgimkoiunmld ; /usr/bin/python3'
Dec 09 16:02:09 compute-0 sudo[72621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:09 compute-0 python3[72623]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 16:02:09 compute-0 sudo[72621]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:10 compute-0 sudo[72647]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-patdxuzqkabteintzvhoyidjyrpleizh ; /usr/bin/python3'
Dec 09 16:02:10 compute-0 sudo[72647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:10 compute-0 python3[72649]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:02:10 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Dec 09 16:02:10 compute-0 sudo[72647]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:10 compute-0 sudo[72678]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzaivbyaqzxzxsdqdjhvagzanjuxueey ; /usr/bin/python3'
Dec 09 16:02:10 compute-0 sudo[72678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:10 compute-0 python3[72680]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:02:10 compute-0 lvm[72683]: PV /dev/loop4 not used.
Dec 09 16:02:10 compute-0 lvm[72685]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:02:10 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 09 16:02:10 compute-0 lvm[72693]:   1 logical volume(s) in volume group "ceph_vg1" now active
Dec 09 16:02:10 compute-0 lvm[72696]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:02:10 compute-0 lvm[72696]: VG ceph_vg1 finished
Dec 09 16:02:10 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 09 16:02:10 compute-0 sudo[72678]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:11 compute-0 sudo[72772]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxymvjenwvuobplhahbvpqbeopquwjjl ; /usr/bin/python3'
Dec 09 16:02:11 compute-0 sudo[72772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:11 compute-0 python3[72774]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 16:02:11 compute-0 sudo[72772]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:11 compute-0 sudo[72845]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkdybydjwoaxknwwtlwhpogivuspnxlf ; /usr/bin/python3'
Dec 09 16:02:11 compute-0 sudo[72845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:11 compute-0 python3[72847]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765296130.9755628-36396-122112534512863/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:02:11 compute-0 sudo[72845]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:11 compute-0 sudo[72895]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dokdqdqrktjoymvvawrdmifvnmtlkuqc ; /usr/bin/python3'
Dec 09 16:02:11 compute-0 sudo[72895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:12 compute-0 python3[72897]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:02:12 compute-0 systemd[1]: Reloading.
Dec 09 16:02:12 compute-0 chronyd[58592]: Selected source 162.159.200.1 (pool.ntp.org)
Dec 09 16:02:12 compute-0 systemd-sysv-generator[72929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:02:12 compute-0 systemd-rc-local-generator[72926]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:02:12 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 09 16:02:12 compute-0 bash[72937]: /dev/loop4: [64513]:4327913 (/var/lib/ceph-osd-1.img)
Dec 09 16:02:12 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 09 16:02:12 compute-0 sudo[72895]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:12 compute-0 lvm[72939]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:02:12 compute-0 lvm[72939]: VG ceph_vg1 finished
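
Because loop attachments do not survive a reboot, the playbook also installs and enables ceph-osd-losetup-1.service to re-create /dev/loop4 from the image file at boot. The unit's body is not logged (content=NOT_LOGGING_PARAMETER), but the bash output "/dev/loop4: [64513]:4327913 (/var/lib/ceph-osd-1.img)" is the classic losetup status line, so a plausible reconstruction of the template, hypothetical and for illustration only, would be:

    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Attach only if not already attached, then print the mapping.
    ExecStart=/bin/bash -c 'losetup /dev/loop4 || losetup /dev/loop4 /var/lib/ceph-osd-1.img; losetup /dev/loop4'

    [Install]
    WantedBy=multi-user.target

The same three steps (sparse file, LVM stack, persistence unit) repeat below for /dev/loop5, ceph_vg2, and ceph-osd-losetup-2.service.
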
Dec 09 16:02:12 compute-0 sudo[72963]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkjvyeigmdgvriaorjxwmafteqfhqlxy ; /usr/bin/python3'
Dec 09 16:02:12 compute-0 sudo[72963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:12 compute-0 python3[72965]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 09 16:02:14 compute-0 sudo[72963]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:14 compute-0 sudo[72990]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvzosqepmjrxqmdbsgkzemmvjazasssw ; /usr/bin/python3'
Dec 09 16:02:14 compute-0 sudo[72990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:14 compute-0 python3[72992]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 16:02:14 compute-0 sudo[72990]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:14 compute-0 sudo[73016]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ervffjpoqzsfqwfjbdvotguyswhrfqnc ; /usr/bin/python3'
Dec 09 16:02:14 compute-0 sudo[73016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:14 compute-0 python3[73018]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:02:14 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Dec 09 16:02:14 compute-0 sudo[73016]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:15 compute-0 sudo[73047]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlrcnvagakizbeedrfddhpirdvjpbzvn ; /usr/bin/python3'
Dec 09 16:02:15 compute-0 sudo[73047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:15 compute-0 python3[73049]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:02:15 compute-0 lvm[73052]: PV /dev/loop5 not used.
Dec 09 16:02:15 compute-0 lvm[73054]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:02:15 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec 09 16:02:15 compute-0 lvm[73065]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:02:15 compute-0 lvm[73065]: VG ceph_vg2 finished
Dec 09 16:02:15 compute-0 lvm[73063]:   1 logical volume(s) in volume group "ceph_vg2" now active
Dec 09 16:02:15 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec 09 16:02:15 compute-0 sudo[73047]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:15 compute-0 sudo[73141]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uziqwkcdgyxltcdmubmgbonxeyxdlqxv ; /usr/bin/python3'
Dec 09 16:02:15 compute-0 sudo[73141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:15 compute-0 python3[73143]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 16:02:15 compute-0 sudo[73141]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:16 compute-0 sudo[73214]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdmytxkaeavyyaolfrptdcanbrwrzunu ; /usr/bin/python3'
Dec 09 16:02:16 compute-0 sudo[73214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:16 compute-0 python3[73216]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765296135.6762328-36423-250732711997141/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:02:16 compute-0 sudo[73214]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:16 compute-0 sudo[73264]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-filbfvwiwssxhimvlzdczsosbykiuptr ; /usr/bin/python3'
Dec 09 16:02:16 compute-0 sudo[73264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:16 compute-0 python3[73266]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:02:17 compute-0 systemd[1]: Reloading.
Dec 09 16:02:18 compute-0 systemd-rc-local-generator[73291]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:02:18 compute-0 systemd-sysv-generator[73298]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:02:18 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 09 16:02:18 compute-0 bash[73305]: /dev/loop5: [64513]:4327775 (/var/lib/ceph-osd-2.img)
Dec 09 16:02:18 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 09 16:02:18 compute-0 sudo[73264]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:18 compute-0 lvm[73307]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:02:18 compute-0 lvm[73307]: VG ceph_vg2 finished
Dec 09 16:02:20 compute-0 python3[73331]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:02:22 compute-0 sudo[73422]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgqjodpqicxqryswifzlqtjmpduiuoro ; /usr/bin/python3'
Dec 09 16:02:22 compute-0 sudo[73422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:22 compute-0 python3[73424]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 09 16:02:25 compute-0 sudo[73422]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:25 compute-0 sudo[73480]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgxcfpvedkbnvszxppjgijcpwcwimtzu ; /usr/bin/python3'
Dec 09 16:02:25 compute-0 sudo[73480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:25 compute-0 python3[73482]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 09 16:02:30 compute-0 groupadd[73492]: group added to /etc/group: name=cephadm, GID=992
Dec 09 16:02:30 compute-0 groupadd[73492]: group added to /etc/gshadow: name=cephadm
Dec 09 16:02:30 compute-0 groupadd[73492]: new group: name=cephadm, GID=992
Dec 09 16:02:30 compute-0 useradd[73499]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 09 16:02:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 16:02:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 16:02:31 compute-0 sudo[73480]: pam_unix(sudo:session): session closed for user root
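
cephadm itself comes from the CentOS Storage SIG: centos-release-ceph-tentacle installs the repository definition for the Ceph "Tentacle" (v20) release stream, after which the cephadm package is available; its post-install scripts create a dedicated cephadm service account (UID/GID 992, home /var/lib/cephadm), as the groupadd/useradd lines show. The equivalent manual steps:

    dnf -y install centos-release-ceph-tentacle   # enable the Ceph Tentacle SIG repo
    dnf -y install cephadm                        # /usr/sbin/cephadm plus the cephadm user
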
Dec 09 16:02:31 compute-0 sudo[73598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vojhzqvlpggpbbzmuthpinnqtgatdawh ; /usr/bin/python3'
Dec 09 16:02:31 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 16:02:31 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 16:02:31 compute-0 systemd[1]: run-r28d7dde3cb0a43c984488fb9979899e9.service: Deactivated successfully.
Dec 09 16:02:31 compute-0 sudo[73598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:31 compute-0 python3[73601]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 16:02:31 compute-0 sudo[73598]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:32 compute-0 sudo[73627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpoqqqlctypgzhoxkdktpvoaujitrdde ; /usr/bin/python3'
Dec 09 16:02:32 compute-0 sudo[73627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:32 compute-0 python3[73629]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:02:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 16:02:32 compute-0 sudo[73627]: pam_unix(sudo:session): session closed for user root
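
Before bootstrapping, the playbook probes for an existing deployment: cephadm ls reports the daemons cephadm already manages on this host, and --no-detail trims the per-daemon metadata, so an empty list marks the node as fresh:

    /usr/sbin/cephadm ls --no-detail   # JSON array of managed daemons; [] on an untouched host
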
Dec 09 16:02:32 compute-0 sudo[73665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzbkftiqlbxtmiloyqcxjakldiysfrwx ; /usr/bin/python3'
Dec 09 16:02:32 compute-0 sudo[73665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:33 compute-0 python3[73667]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:02:33 compute-0 sudo[73665]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:33 compute-0 sudo[73691]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dldxwxhceenkmlczhekrujyprrhobhbz ; /usr/bin/python3'
Dec 09 16:02:33 compute-0 sudo[73691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:33 compute-0 python3[73693]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:02:33 compute-0 sudo[73691]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:34 compute-0 sudo[73769]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghpxblldkbrxwjagcshhfkqcwfsbcctr ; /usr/bin/python3'
Dec 09 16:02:34 compute-0 sudo[73769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:34 compute-0 python3[73771]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 16:02:34 compute-0 sudo[73769]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:34 compute-0 sudo[73842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irzzkoatqnnhxotynlsvzfjraujywksq ; /usr/bin/python3'
Dec 09 16:02:34 compute-0 sudo[73842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:34 compute-0 python3[73844]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765296153.86737-36573-274011143777893/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:02:34 compute-0 sudo[73842]: pam_unix(sudo:session): session closed for user root
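
Two input files are staged under /home/ceph-admin: a service spec (specs/ceph_spec.yaml) describing the intended daemon placement, and assimilate_ceph.conf carrying initial cluster options. Neither body is logged (content=NOT_LOGGING_PARAMETER). Purely as a hypothetical illustration of the spec format, assuming the two LVs created above are destined to become OSDs, such a spec could look like:

    # Hypothetical OSD service spec; the real ceph_spec.yaml content is not logged.
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    spec:
      data_devices:
        paths:
          - /dev/ceph_vg1/ceph_lv1
          - /dev/ceph_vg2/ceph_lv2
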
Dec 09 16:02:35 compute-0 sudo[73944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rijetiuvkoyvddserxaupwhnadqwwwro ; /usr/bin/python3'
Dec 09 16:02:35 compute-0 sudo[73944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:35 compute-0 python3[73946]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 16:02:35 compute-0 sudo[73944]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:35 compute-0 sudo[74017]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usaznaryvdnrmhablwofciqxvdtnvcgv ; /usr/bin/python3'
Dec 09 16:02:35 compute-0 sudo[74017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:35 compute-0 python3[74019]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765296155.114591-36591-141746806533973/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:02:35 compute-0 sudo[74017]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:36 compute-0 sudo[74067]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkebrjjjxdpcsmuhdubvswkgorvynnsy ; /usr/bin/python3'
Dec 09 16:02:36 compute-0 sudo[74067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:36 compute-0 python3[74069]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 16:02:36 compute-0 sudo[74067]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:36 compute-0 sudo[74095]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjrwyyiiudspgvahgxalfreswzlkuiao ; /usr/bin/python3'
Dec 09 16:02:36 compute-0 sudo[74095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:36 compute-0 python3[74097]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 16:02:36 compute-0 sudo[74095]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:36 compute-0 sudo[74123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wttzztgivzjymmatlveykjxsnxtrbnpn ; /usr/bin/python3'
Dec 09 16:02:36 compute-0 sudo[74123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:36 compute-0 python3[74125]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 16:02:37 compute-0 sudo[74123]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:37 compute-0 sudo[74151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fheihrgrdivccyveczviihmtqrqqbupz ; /usr/bin/python3'
Dec 09 16:02:37 compute-0 sudo[74151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:02:37 compute-0 python3[74153]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
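
This is the central step: cephadm bootstrap brings up a one-node Ceph cluster (initial MON and MGR) on 192.168.122.100. The stray "\--" sequences in the logged command are backslash line-continuations carried over from the playbook's multi-line YAML; reflowed, the intended invocation is:

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --single-host-defaults \
        --skip-monitoring-stack --skip-dashboard \
        --mon-ip 192.168.122.100

The pre-created ceph-admin keypair is reused instead of letting bootstrap generate one, a fixed fsid is supplied by the job, --single-host-defaults relaxes replication defaults for a one-host cluster, and the monitoring stack and dashboard are skipped for CI. The sshd and systemd-logind lines that follow are bootstrap verifying it can SSH back into the host as ceph-admin and run sudo, which is the channel cephadm uses to manage daemons.
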
Dec 09 16:02:37 compute-0 sshd-session[74157]: Accepted publickey for ceph-admin from 192.168.122.100 port 49098 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:02:37 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 09 16:02:37 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 09 16:02:37 compute-0 systemd-logind[786]: New session 19 of user ceph-admin.
Dec 09 16:02:37 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 09 16:02:37 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 09 16:02:37 compute-0 systemd[74161]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:02:37 compute-0 systemd[74161]: Queued start job for default target Main User Target.
Dec 09 16:02:37 compute-0 systemd[74161]: Created slice User Application Slice.
Dec 09 16:02:37 compute-0 systemd[74161]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 09 16:02:37 compute-0 systemd[74161]: Started Daily Cleanup of User's Temporary Directories.
Dec 09 16:02:37 compute-0 systemd[74161]: Reached target Paths.
Dec 09 16:02:37 compute-0 systemd[74161]: Reached target Timers.
Dec 09 16:02:37 compute-0 systemd[74161]: Starting D-Bus User Message Bus Socket...
Dec 09 16:02:37 compute-0 systemd[74161]: Starting Create User's Volatile Files and Directories...
Dec 09 16:02:37 compute-0 systemd[74161]: Finished Create User's Volatile Files and Directories.
Dec 09 16:02:37 compute-0 systemd[74161]: Listening on D-Bus User Message Bus Socket.
Dec 09 16:02:37 compute-0 systemd[74161]: Reached target Sockets.
Dec 09 16:02:37 compute-0 systemd[74161]: Reached target Basic System.
Dec 09 16:02:37 compute-0 systemd[74161]: Reached target Main User Target.
Dec 09 16:02:37 compute-0 systemd[74161]: Startup finished in 130ms.
Dec 09 16:02:37 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 09 16:02:37 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Dec 09 16:02:37 compute-0 sshd-session[74157]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:02:37 compute-0 sudo[74177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 09 16:02:37 compute-0 sudo[74177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:02:37 compute-0 sudo[74177]: pam_unix(sudo:session): session closed for user root
Dec 09 16:02:37 compute-0 sshd-session[74176]: Received disconnect from 192.168.122.100 port 49098:11: disconnected by user
Dec 09 16:02:37 compute-0 sshd-session[74176]: Disconnected from user ceph-admin 192.168.122.100 port 49098
Dec 09 16:02:38 compute-0 sshd-session[74157]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 16:02:38 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 09 16:02:38 compute-0 systemd-logind[786]: Session 19 logged out. Waiting for processes to exit.
Dec 09 16:02:38 compute-0 systemd-logind[786]: Removed session 19.
Dec 09 16:02:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 16:02:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 16:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1706569392-lower\x2dmapped.mount: Deactivated successfully.
Dec 09 16:02:44 compute-0 sshd-session[74295]: Invalid user admin from 146.190.31.45 port 59272
Dec 09 16:02:44 compute-0 sshd-session[74295]: Connection closed by invalid user admin 146.190.31.45 port 59272 [preauth]
Dec 09 16:02:48 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 09 16:02:48 compute-0 systemd[74161]: Activating special unit Exit the Session...
Dec 09 16:02:48 compute-0 systemd[74161]: Stopped target Main User Target.
Dec 09 16:02:48 compute-0 systemd[74161]: Stopped target Basic System.
Dec 09 16:02:48 compute-0 systemd[74161]: Stopped target Paths.
Dec 09 16:02:48 compute-0 systemd[74161]: Stopped target Sockets.
Dec 09 16:02:48 compute-0 systemd[74161]: Stopped target Timers.
Dec 09 16:02:48 compute-0 systemd[74161]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 09 16:02:48 compute-0 systemd[74161]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 09 16:02:48 compute-0 systemd[74161]: Closed D-Bus User Message Bus Socket.
Dec 09 16:02:48 compute-0 systemd[74161]: Stopped Create User's Volatile Files and Directories.
Dec 09 16:02:48 compute-0 systemd[74161]: Removed slice User Application Slice.
Dec 09 16:02:48 compute-0 systemd[74161]: Reached target Shutdown.
Dec 09 16:02:48 compute-0 systemd[74161]: Finished Exit the Session.
Dec 09 16:02:48 compute-0 systemd[74161]: Reached target Exit the Session.
Dec 09 16:02:48 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 09 16:02:48 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 09 16:02:48 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 09 16:02:48 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 09 16:02:48 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 09 16:02:48 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 09 16:02:48 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 09 16:02:55 compute-0 sshd-session[74318]: Connection closed by 45.148.10.121 port 42156 [preauth]
Dec 09 16:02:57 compute-0 podman[74253]: 2025-12-09 16:02:57.238885 +0000 UTC m=+18.941388402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 16:02:57 compute-0 podman[74344]: 2025-12-09 16:02:57.316108231 +0000 UTC m=+0.049361288 container create 73fff0cbf0a70c4681bc43272f456a1244af4821703c0030e571127a1a49908a (image=quay.io/ceph/ceph:v20, name=serene_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:02:57 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 09 16:02:57 compute-0 systemd[1]: Started libpod-conmon-73fff0cbf0a70c4681bc43272f456a1244af4821703c0030e571127a1a49908a.scope.
Dec 09 16:02:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:02:57 compute-0 podman[74344]: 2025-12-09 16:02:57.295659368 +0000 UTC m=+0.028912455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:02:57 compute-0 podman[74344]: 2025-12-09 16:02:57.398240593 +0000 UTC m=+0.131493700 container init 73fff0cbf0a70c4681bc43272f456a1244af4821703c0030e571127a1a49908a (image=quay.io/ceph/ceph:v20, name=serene_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 09 16:02:57 compute-0 podman[74344]: 2025-12-09 16:02:57.40900555 +0000 UTC m=+0.142258607 container start 73fff0cbf0a70c4681bc43272f456a1244af4821703c0030e571127a1a49908a (image=quay.io/ceph/ceph:v20, name=serene_poincare, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:02:57 compute-0 podman[74344]: 2025-12-09 16:02:57.412467188 +0000 UTC m=+0.145720305 container attach 73fff0cbf0a70c4681bc43272f456a1244af4821703c0030e571127a1a49908a (image=quay.io/ceph/ceph:v20, name=serene_poincare, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:02:57 compute-0 serene_poincare[74361]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 09 16:02:57 compute-0 systemd[1]: libpod-73fff0cbf0a70c4681bc43272f456a1244af4821703c0030e571127a1a49908a.scope: Deactivated successfully.
Dec 09 16:02:57 compute-0 conmon[74361]: conmon 73fff0cbf0a70c4681bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73fff0cbf0a70c4681bc43272f456a1244af4821703c0030e571127a1a49908a.scope/container/memory.events
Dec 09 16:02:57 compute-0 podman[74344]: 2025-12-09 16:02:57.511087 +0000 UTC m=+0.244340067 container died 73fff0cbf0a70c4681bc43272f456a1244af4821703c0030e571127a1a49908a (image=quay.io/ceph/ceph:v20, name=serene_poincare, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc98ca239d401b44b348325e4de3805399e788d8919e158cbc6f6b317ef2021e-merged.mount: Deactivated successfully.
Dec 09 16:02:57 compute-0 podman[74344]: 2025-12-09 16:02:57.56020026 +0000 UTC m=+0.293453317 container remove 73fff0cbf0a70c4681bc43272f456a1244af4821703c0030e571127a1a49908a (image=quay.io/ceph/ceph:v20, name=serene_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:02:57 compute-0 systemd[1]: libpod-conmon-73fff0cbf0a70c4681bc43272f456a1244af4821703c0030e571127a1a49908a.scope: Deactivated successfully.
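
From here bootstrap drives everything through short-lived containers from the freshly pulled quay.io/ceph/ceph:v20 image rather than host binaries: each podman create/init/start/attach/died/remove cycle in this stretch of the log is one helper invocation. The first of these (serene_poincare, above) merely reported the image's Ceph version; an equivalent manual check, assuming only podman and the image are present:

    podman run --rm quay.io/ceph/ceph:v20 ceph --version
    # -> ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
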
Dec 09 16:02:57 compute-0 podman[74378]: 2025-12-09 16:02:57.630759762 +0000 UTC m=+0.046100895 container create c92de2b0ba1ac4300eec069e748622d8f66c2cf08044e2f7b47f929cca82a974 (image=quay.io/ceph/ceph:v20, name=exciting_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 09 16:02:57 compute-0 systemd[1]: Started libpod-conmon-c92de2b0ba1ac4300eec069e748622d8f66c2cf08044e2f7b47f929cca82a974.scope.
Dec 09 16:02:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:02:57 compute-0 podman[74378]: 2025-12-09 16:02:57.690793684 +0000 UTC m=+0.106134827 container init c92de2b0ba1ac4300eec069e748622d8f66c2cf08044e2f7b47f929cca82a974 (image=quay.io/ceph/ceph:v20, name=exciting_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 09 16:02:57 compute-0 podman[74378]: 2025-12-09 16:02:57.69945557 +0000 UTC m=+0.114796703 container start c92de2b0ba1ac4300eec069e748622d8f66c2cf08044e2f7b47f929cca82a974 (image=quay.io/ceph/ceph:v20, name=exciting_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:02:57 compute-0 exciting_bohr[74394]: 167 167
Dec 09 16:02:57 compute-0 podman[74378]: 2025-12-09 16:02:57.60931303 +0000 UTC m=+0.024654183 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:02:57 compute-0 systemd[1]: libpod-c92de2b0ba1ac4300eec069e748622d8f66c2cf08044e2f7b47f929cca82a974.scope: Deactivated successfully.
Dec 09 16:02:57 compute-0 podman[74378]: 2025-12-09 16:02:57.707106889 +0000 UTC m=+0.122448042 container attach c92de2b0ba1ac4300eec069e748622d8f66c2cf08044e2f7b47f929cca82a974 (image=quay.io/ceph/ceph:v20, name=exciting_bohr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 09 16:02:57 compute-0 conmon[74394]: conmon c92de2b0ba1ac4300eec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c92de2b0ba1ac4300eec069e748622d8f66c2cf08044e2f7b47f929cca82a974.scope/container/memory.events
Dec 09 16:02:57 compute-0 podman[74378]: 2025-12-09 16:02:57.708426476 +0000 UTC m=+0.123767609 container died c92de2b0ba1ac4300eec069e748622d8f66c2cf08044e2f7b47f929cca82a974 (image=quay.io/ceph/ceph:v20, name=exciting_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:02:57 compute-0 podman[74378]: 2025-12-09 16:02:57.738052321 +0000 UTC m=+0.153393454 container remove c92de2b0ba1ac4300eec069e748622d8f66c2cf08044e2f7b47f929cca82a974 (image=quay.io/ceph/ceph:v20, name=exciting_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 09 16:02:57 compute-0 systemd[1]: libpod-conmon-c92de2b0ba1ac4300eec069e748622d8f66c2cf08044e2f7b47f929cca82a974.scope: Deactivated successfully.
Dec 09 16:02:57 compute-0 podman[74411]: 2025-12-09 16:02:57.796793676 +0000 UTC m=+0.038517110 container create 8b0a58917eedf799d90be56f7a8d375f34183e8824cbc66fe3e086659e2c401f (image=quay.io/ceph/ceph:v20, name=gracious_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:02:57 compute-0 systemd[1]: Started libpod-conmon-8b0a58917eedf799d90be56f7a8d375f34183e8824cbc66fe3e086659e2c401f.scope.
Dec 09 16:02:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:02:57 compute-0 podman[74411]: 2025-12-09 16:02:57.852287208 +0000 UTC m=+0.094010662 container init 8b0a58917eedf799d90be56f7a8d375f34183e8824cbc66fe3e086659e2c401f (image=quay.io/ceph/ceph:v20, name=gracious_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 09 16:02:57 compute-0 podman[74411]: 2025-12-09 16:02:57.857175507 +0000 UTC m=+0.098898941 container start 8b0a58917eedf799d90be56f7a8d375f34183e8824cbc66fe3e086659e2c401f (image=quay.io/ceph/ceph:v20, name=gracious_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 09 16:02:57 compute-0 podman[74411]: 2025-12-09 16:02:57.860824721 +0000 UTC m=+0.102548175 container attach 8b0a58917eedf799d90be56f7a8d375f34183e8824cbc66fe3e086659e2c401f (image=quay.io/ceph/ceph:v20, name=gracious_sinoussi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:02:57 compute-0 podman[74411]: 2025-12-09 16:02:57.779862003 +0000 UTC m=+0.021585467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:02:57 compute-0 gracious_sinoussi[74428]: AQAxSDhpMJRRNBAAOzj9q2iXcWSn6TUXjCNCOA==
Dec 09 16:02:57 compute-0 systemd[1]: libpod-8b0a58917eedf799d90be56f7a8d375f34183e8824cbc66fe3e086659e2c401f.scope: Deactivated successfully.
Dec 09 16:02:57 compute-0 podman[74411]: 2025-12-09 16:02:57.880991716 +0000 UTC m=+0.122715150 container died 8b0a58917eedf799d90be56f7a8d375f34183e8824cbc66fe3e086659e2c401f (image=quay.io/ceph/ceph:v20, name=gracious_sinoussi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:02:57 compute-0 podman[74411]: 2025-12-09 16:02:57.919571686 +0000 UTC m=+0.161295120 container remove 8b0a58917eedf799d90be56f7a8d375f34183e8824cbc66fe3e086659e2c401f (image=quay.io/ceph/ceph:v20, name=gracious_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:02:57 compute-0 systemd[1]: libpod-conmon-8b0a58917eedf799d90be56f7a8d375f34183e8824cbc66fe3e086659e2c401f.scope: Deactivated successfully.
Dec 09 16:02:58 compute-0 podman[74446]: 2025-12-09 16:02:58.005169826 +0000 UTC m=+0.058622372 container create de2347ca1b9969fecddd29afac7e3b46c8b29958ac8a65b8584360a1fd1f10db (image=quay.io/ceph/ceph:v20, name=great_johnson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 09 16:02:58 compute-0 systemd[1]: Started libpod-conmon-de2347ca1b9969fecddd29afac7e3b46c8b29958ac8a65b8584360a1fd1f10db.scope.
Dec 09 16:02:58 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:02:58 compute-0 podman[74446]: 2025-12-09 16:02:58.05932089 +0000 UTC m=+0.112773486 container init de2347ca1b9969fecddd29afac7e3b46c8b29958ac8a65b8584360a1fd1f10db (image=quay.io/ceph/ceph:v20, name=great_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:02:58 compute-0 podman[74446]: 2025-12-09 16:02:58.06633981 +0000 UTC m=+0.119792366 container start de2347ca1b9969fecddd29afac7e3b46c8b29958ac8a65b8584360a1fd1f10db (image=quay.io/ceph/ceph:v20, name=great_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:02:58 compute-0 podman[74446]: 2025-12-09 16:02:58.069708706 +0000 UTC m=+0.123161252 container attach de2347ca1b9969fecddd29afac7e3b46c8b29958ac8a65b8584360a1fd1f10db (image=quay.io/ceph/ceph:v20, name=great_johnson, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:02:58 compute-0 podman[74446]: 2025-12-09 16:02:57.981548903 +0000 UTC m=+0.035001459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:02:58 compute-0 great_johnson[74462]: AQAySDhprjB0BRAANtxEoeq98fHk1C+akAvgoQ==
Dec 09 16:02:58 compute-0 systemd[1]: libpod-de2347ca1b9969fecddd29afac7e3b46c8b29958ac8a65b8584360a1fd1f10db.scope: Deactivated successfully.
Dec 09 16:02:58 compute-0 podman[74446]: 2025-12-09 16:02:58.09546104 +0000 UTC m=+0.148913586 container died de2347ca1b9969fecddd29afac7e3b46c8b29958ac8a65b8584360a1fd1f10db (image=quay.io/ceph/ceph:v20, name=great_johnson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:02:58 compute-0 podman[74446]: 2025-12-09 16:02:58.129035338 +0000 UTC m=+0.182487884 container remove de2347ca1b9969fecddd29afac7e3b46c8b29958ac8a65b8584360a1fd1f10db (image=quay.io/ceph/ceph:v20, name=great_johnson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 09 16:02:58 compute-0 systemd[1]: libpod-conmon-de2347ca1b9969fecddd29afac7e3b46c8b29958ac8a65b8584360a1fd1f10db.scope: Deactivated successfully.
Dec 09 16:02:58 compute-0 podman[74483]: 2025-12-09 16:02:58.195091001 +0000 UTC m=+0.044545981 container create 2712f36b38f9c75e9764d6b08324b2ef20d048916f788d90fffce5e4f08c228d (image=quay.io/ceph/ceph:v20, name=agitated_mcclintock, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:02:58 compute-0 systemd[1]: Started libpod-conmon-2712f36b38f9c75e9764d6b08324b2ef20d048916f788d90fffce5e4f08c228d.scope.
Dec 09 16:02:58 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:02:58 compute-0 podman[74483]: 2025-12-09 16:02:58.176084729 +0000 UTC m=+0.025539739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:02:58 compute-0 podman[74483]: 2025-12-09 16:02:58.515523307 +0000 UTC m=+0.364978307 container init 2712f36b38f9c75e9764d6b08324b2ef20d048916f788d90fffce5e4f08c228d (image=quay.io/ceph/ceph:v20, name=agitated_mcclintock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:02:58 compute-0 podman[74483]: 2025-12-09 16:02:58.522538207 +0000 UTC m=+0.371993187 container start 2712f36b38f9c75e9764d6b08324b2ef20d048916f788d90fffce5e4f08c228d (image=quay.io/ceph/ceph:v20, name=agitated_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:02:58 compute-0 agitated_mcclintock[74500]: AQAySDhpySlxIBAATXc5MgtXPyEZN9Oms9a9SA==
Dec 09 16:02:58 compute-0 systemd[1]: libpod-2712f36b38f9c75e9764d6b08324b2ef20d048916f788d90fffce5e4f08c228d.scope: Deactivated successfully.
Dec 09 16:02:58 compute-0 podman[74483]: 2025-12-09 16:02:58.597781312 +0000 UTC m=+0.447236322 container attach 2712f36b38f9c75e9764d6b08324b2ef20d048916f788d90fffce5e4f08c228d (image=quay.io/ceph/ceph:v20, name=agitated_mcclintock, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:02:58 compute-0 podman[74483]: 2025-12-09 16:02:58.598344508 +0000 UTC m=+0.447799498 container died 2712f36b38f9c75e9764d6b08324b2ef20d048916f788d90fffce5e4f08c228d (image=quay.io/ceph/ceph:v20, name=agitated_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:02:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e90d8818c29d329de8a87b4cc066e96c409bb4870e696153e8d3cb4326e7a67d-merged.mount: Deactivated successfully.
Dec 09 16:02:59 compute-0 podman[74483]: 2025-12-09 16:02:59.333564979 +0000 UTC m=+1.183019979 container remove 2712f36b38f9c75e9764d6b08324b2ef20d048916f788d90fffce5e4f08c228d (image=quay.io/ceph/ceph:v20, name=agitated_mcclintock, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:02:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 16:02:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 16:02:59 compute-0 systemd[1]: libpod-conmon-2712f36b38f9c75e9764d6b08324b2ef20d048916f788d90fffce5e4f08c228d.scope: Deactivated successfully.
Dec 09 16:02:59 compute-0 podman[74521]: 2025-12-09 16:02:59.414074335 +0000 UTC m=+0.051133729 container create c71f9cdde7b15548b82979d720646c049418a55059ee8d4c4a3790e4b220f406 (image=quay.io/ceph/ceph:v20, name=nice_tu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:02:59 compute-0 systemd[1]: Started libpod-conmon-c71f9cdde7b15548b82979d720646c049418a55059ee8d4c4a3790e4b220f406.scope.
Dec 09 16:02:59 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d17f2258c6474e5145cbf4283f0309c08e07d3d56f1d93637a4d102f91fa70/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 09 16:02:59 compute-0 podman[74521]: 2025-12-09 16:02:59.391613454 +0000 UTC m=+0.028672878 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:02:59 compute-0 podman[74521]: 2025-12-09 16:02:59.492663775 +0000 UTC m=+0.129723179 container init c71f9cdde7b15548b82979d720646c049418a55059ee8d4c4a3790e4b220f406 (image=quay.io/ceph/ceph:v20, name=nice_tu, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:02:59 compute-0 podman[74521]: 2025-12-09 16:02:59.503220556 +0000 UTC m=+0.140279980 container start c71f9cdde7b15548b82979d720646c049418a55059ee8d4c4a3790e4b220f406 (image=quay.io/ceph/ceph:v20, name=nice_tu, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 16:02:59 compute-0 podman[74521]: 2025-12-09 16:02:59.506628223 +0000 UTC m=+0.143687627 container attach c71f9cdde7b15548b82979d720646c049418a55059ee8d4c4a3790e4b220f406 (image=quay.io/ceph/ceph:v20, name=nice_tu, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:02:59 compute-0 nice_tu[74537]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 09 16:02:59 compute-0 nice_tu[74537]: setting min_mon_release = tentacle
Dec 09 16:02:59 compute-0 nice_tu[74537]: /usr/bin/monmaptool: set fsid to 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:02:59 compute-0 nice_tu[74537]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 09 16:02:59 compute-0 systemd[1]: libpod-c71f9cdde7b15548b82979d720646c049418a55059ee8d4c4a3790e4b220f406.scope: Deactivated successfully.
Dec 09 16:02:59 compute-0 podman[74521]: 2025-12-09 16:02:59.534341883 +0000 UTC m=+0.171401317 container died c71f9cdde7b15548b82979d720646c049418a55059ee8d4c4a3790e4b220f406 (image=quay.io/ceph/ceph:v20, name=nice_tu, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:02:59 compute-0 podman[74521]: 2025-12-09 16:02:59.574744665 +0000 UTC m=+0.211804099 container remove c71f9cdde7b15548b82979d720646c049418a55059ee8d4c4a3790e4b220f406 (image=quay.io/ceph/ceph:v20, name=nice_tu, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:02:59 compute-0 systemd[1]: libpod-conmon-c71f9cdde7b15548b82979d720646c049418a55059ee8d4c4a3790e4b220f406.scope: Deactivated successfully.
Dec 09 16:02:59 compute-0 podman[74555]: 2025-12-09 16:02:59.664197686 +0000 UTC m=+0.059817067 container create 089c8fb4a7c862fdb056b9d16b7f2c21e7ad2d81c7d7f1b067dc6ab93e9efab7 (image=quay.io/ceph/ceph:v20, name=epic_franklin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 09 16:02:59 compute-0 systemd[1]: Started libpod-conmon-089c8fb4a7c862fdb056b9d16b7f2c21e7ad2d81c7d7f1b067dc6ab93e9efab7.scope.
Dec 09 16:02:59 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95e83d4e8ee7ff97c5fd3634eb3724d181da00d60e24f430bf8bfed52006b02/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95e83d4e8ee7ff97c5fd3634eb3724d181da00d60e24f430bf8bfed52006b02/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 09 16:02:59 compute-0 podman[74555]: 2025-12-09 16:02:59.631190655 +0000 UTC m=+0.026810106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95e83d4e8ee7ff97c5fd3634eb3724d181da00d60e24f430bf8bfed52006b02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95e83d4e8ee7ff97c5fd3634eb3724d181da00d60e24f430bf8bfed52006b02/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:02:59 compute-0 podman[74555]: 2025-12-09 16:02:59.741773997 +0000 UTC m=+0.137393378 container init 089c8fb4a7c862fdb056b9d16b7f2c21e7ad2d81c7d7f1b067dc6ab93e9efab7 (image=quay.io/ceph/ceph:v20, name=epic_franklin, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:02:59 compute-0 podman[74555]: 2025-12-09 16:02:59.747390497 +0000 UTC m=+0.143009858 container start 089c8fb4a7c862fdb056b9d16b7f2c21e7ad2d81c7d7f1b067dc6ab93e9efab7 (image=quay.io/ceph/ceph:v20, name=epic_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:02:59 compute-0 podman[74555]: 2025-12-09 16:02:59.751063782 +0000 UTC m=+0.146683143 container attach 089c8fb4a7c862fdb056b9d16b7f2c21e7ad2d81c7d7f1b067dc6ab93e9efab7 (image=quay.io/ceph/ceph:v20, name=epic_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:02:59 compute-0 systemd[1]: libpod-089c8fb4a7c862fdb056b9d16b7f2c21e7ad2d81c7d7f1b067dc6ab93e9efab7.scope: Deactivated successfully.
Dec 09 16:02:59 compute-0 podman[74555]: 2025-12-09 16:02:59.852490264 +0000 UTC m=+0.248109625 container died 089c8fb4a7c862fdb056b9d16b7f2c21e7ad2d81c7d7f1b067dc6ab93e9efab7 (image=quay.io/ceph/ceph:v20, name=epic_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:02:59 compute-0 podman[74555]: 2025-12-09 16:02:59.893000209 +0000 UTC m=+0.288619570 container remove 089c8fb4a7c862fdb056b9d16b7f2c21e7ad2d81c7d7f1b067dc6ab93e9efab7 (image=quay.io/ceph/ceph:v20, name=epic_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:02:59 compute-0 systemd[1]: libpod-conmon-089c8fb4a7c862fdb056b9d16b7f2c21e7ad2d81c7d7f1b067dc6ab93e9efab7.scope: Deactivated successfully.
Dec 09 16:02:59 compute-0 systemd[1]: Reloading.
Dec 09 16:03:00 compute-0 systemd-rc-local-generator[74640]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:00 compute-0 systemd-sysv-generator[74644]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:03:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9d17f2258c6474e5145cbf4283f0309c08e07d3d56f1d93637a4d102f91fa70-merged.mount: Deactivated successfully.
Dec 09 16:03:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 16:03:00 compute-0 systemd[1]: Reloading.
Dec 09 16:03:00 compute-0 systemd-sysv-generator[74678]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:03:00 compute-0 systemd-rc-local-generator[74675]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:00 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 09 16:03:00 compute-0 systemd[1]: Reloading.
Dec 09 16:03:00 compute-0 systemd-rc-local-generator[74712]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:00 compute-0 systemd-sysv-generator[74715]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:03:00 compute-0 systemd[1]: Reached target Ceph cluster 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:03:00 compute-0 systemd[1]: Reloading.
Dec 09 16:03:00 compute-0 systemd-rc-local-generator[74749]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:00 compute-0 systemd-sysv-generator[74753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:03:01 compute-0 systemd[1]: Reloading.
Dec 09 16:03:01 compute-0 systemd-sysv-generator[74793]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:03:01 compute-0 systemd-rc-local-generator[74788]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:01 compute-0 systemd[1]: Created slice Slice /system/ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:03:01 compute-0 systemd[1]: Reached target System Time Set.
Dec 09 16:03:01 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec 09 16:03:01 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf...
Dec 09 16:03:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 16:03:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 16:03:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 16:03:01 compute-0 podman[74847]: 2025-12-09 16:03:01.597190134 +0000 UTC m=+0.048792072 container create 47b75c7e1d669f3472e43aedf1538924b572574dd219d7e5f3f9ef1ce8f59e7f (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b778a57d82853619bf3847283bc6cfbcab2d2bf877cb1bc7d00c94bfce61e7d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b778a57d82853619bf3847283bc6cfbcab2d2bf877cb1bc7d00c94bfce61e7d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b778a57d82853619bf3847283bc6cfbcab2d2bf877cb1bc7d00c94bfce61e7d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b778a57d82853619bf3847283bc6cfbcab2d2bf877cb1bc7d00c94bfce61e7d1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:01 compute-0 podman[74847]: 2025-12-09 16:03:01.575437494 +0000 UTC m=+0.027039462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:01 compute-0 podman[74847]: 2025-12-09 16:03:01.671893124 +0000 UTC m=+0.123495082 container init 47b75c7e1d669f3472e43aedf1538924b572574dd219d7e5f3f9ef1ce8f59e7f (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:01 compute-0 podman[74847]: 2025-12-09 16:03:01.68296439 +0000 UTC m=+0.134566328 container start 47b75c7e1d669f3472e43aedf1538924b572574dd219d7e5f3f9ef1ce8f59e7f (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 09 16:03:01 compute-0 bash[74847]: 47b75c7e1d669f3472e43aedf1538924b572574dd219d7e5f3f9ef1ce8f59e7f
Dec 09 16:03:01 compute-0 systemd[1]: Started Ceph mon.compute-0 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:03:01 compute-0 ceph-mon[74866]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 16:03:01 compute-0 ceph-mon[74866]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 09 16:03:01 compute-0 ceph-mon[74866]: pidfile_write: ignore empty --pid-file
Dec 09 16:03:01 compute-0 ceph-mon[74866]: load: jerasure load: lrc 
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: RocksDB version: 7.9.2
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Git sha 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: DB SUMMARY
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: DB Session ID:  WBJH1Q62RAJB3J3ATQGY
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: CURRENT file:  CURRENT
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: IDENTITY file:  IDENTITY
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                         Options.error_if_exists: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                       Options.create_if_missing: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                         Options.paranoid_checks: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                                     Options.env: 0x55e71c6f5440
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                                Options.info_log: 0x55e71ea453e0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                Options.max_file_opening_threads: 16
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                              Options.statistics: (nil)
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                               Options.use_fsync: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                       Options.max_log_file_size: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                         Options.allow_fallocate: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                        Options.use_direct_reads: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:          Options.create_missing_column_families: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                              Options.db_log_dir: 
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                                 Options.wal_dir: 
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                   Options.advise_random_on_open: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                    Options.write_buffer_manager: 0x55e71e9c4140
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                            Options.rate_limiter: (nil)
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                  Options.unordered_write: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                               Options.row_cache: None
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                              Options.wal_filter: None
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.allow_ingest_behind: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.two_write_queues: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.manual_wal_flush: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.wal_compression: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.atomic_flush: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                 Options.log_readahead_size: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.allow_data_in_errors: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.db_host_id: __hostname__
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.max_background_jobs: 2
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.max_background_compactions: -1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.max_subcompactions: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.max_total_wal_size: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                          Options.max_open_files: -1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                          Options.bytes_per_sync: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:       Options.compaction_readahead_size: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                  Options.max_background_flushes: -1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Compression algorithms supported:
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         kZSTD supported: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         kXpressCompression supported: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         kBZip2Compression supported: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         kLZ4Compression supported: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         kZlibCompression supported: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         kLZ4HCCompression supported: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         kSnappyCompression supported: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:           Options.merge_operator: 
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:        Options.compaction_filter: None
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e71e9d0700)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e71e9b58d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:        Options.write_buffer_size: 33554432
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:  Options.max_write_buffer_number: 2
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:          Options.compression: NoCompression
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.num_levels: 7
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 592c5a45-08c3-40c7-974d-53c403a6ec6c
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296181743715, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296181746132, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "WBJH1Q62RAJB3J3ATQGY", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296181746309, "job": 1, "event": "recovery_finished"}
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e71e9e2e00
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: DB pointer 0x55e71eb2e000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:03:01 compute-0 ceph-mon[74866]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e71e9b58d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 09 16:03:01 compute-0 ceph-mon[74866]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@-1(???) e0 preinit fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 09 16:03:01 compute-0 ceph-mon[74866]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 16:03:01 compute-0 ceph-mon[74866]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 09 16:03:01 compute-0 ceph-mon[74866]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 16:03:01 compute-0 ceph-mon[74866]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 09 16:03:01 compute-0 ceph-mon[74866]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: log_channel(cluster) log [DBG] : fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:01 compute-0 ceph-mon[74866]: log_channel(cluster) log [DBG] : last_changed 2025-12-09T16:02:59.529794+0000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: log_channel(cluster) log [DBG] : created 2025-12-09T16:02:59.529794+0000
Dec 09 16:03:01 compute-0 ceph-mon[74866]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 09 16:03:01 compute-0 ceph-mon[74866]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 09 16:03:01 compute-0 ceph-mon[74866]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2025-12-09T16:02:59.794614Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,os=Linux}
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:03:01 compute-0 podman[74867]: 2025-12-09 16:03:01.782656922 +0000 UTC m=+0.054307189 container create 84534ff8a4924a9fe4f8ffff80c0f6bb03089b08d5cd17ec401d2266cbc8ba79 (image=quay.io/ceph/ceph:v20, name=hardcore_murdock, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).mds e1 new map
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-12-09T16:03:01.781860+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
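
Annotation: print_map reports an empty fsmap ("No filesystems configured"), which is expected this early; a CephFS volume needs a metadata pool, a data pool, and an MDS before the map has anything to show. Illustrative sketch only: the pool names and PG counts below are arbitrary, and with 0 OSDs in this cluster the new PGs could not be placed yet:

#!/usr/bin/env python3
import subprocess

def ceph(*args):
    subprocess.check_call(["ceph", *args])

ceph("osd", "pool", "create", "cephfs_metadata", "16")  # hypothetical names
ceph("osd", "pool", "create", "cephfs_data", "32")
ceph("fs", "new", "cephfs", "cephfs_metadata", "cephfs_data")
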
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 09 16:03:01 compute-0 ceph-mon[74866]: log_channel(cluster) log [DBG] : fsmap 
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
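
Annotation: worked check of the cache split logged above. The priority cache manager target is 2147483648 B (2 GiB) and the kv ratio is 0.25, which works out to 536870912 B, the same figure the RocksDB startup dump later reports as the BinnedLRUCache capacity. Whether the mon derives one number from the other here is an inference; the arithmetic itself is:

# pcm target and ratios copied from the _set_cache_ratios /
# register_cache_with_pcm lines above.
pcm_target = 2_147_483_648
for name, ratio in (("kv (rocksdb block cache)", 0.25),
                    ("inc osdmap cache", 0.375),
                    ("full osdmap cache", 0.375)):
    print(f"{name:>26}: {int(pcm_target * ratio):>12} B")
# kv share -> 536870912 B, matching "capacity : 536870912" in the
# block_cache_options section of the RocksDB table_factory dump below.
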
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mkfs 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 09 16:03:01 compute-0 ceph-mon[74866]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 09 16:03:01 compute-0 ceph-mon[74866]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 09 16:03:01 compute-0 ceph-mon[74866]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 09 16:03:01 compute-0 systemd[1]: Started libpod-conmon-84534ff8a4924a9fe4f8ffff80c0f6bb03089b08d5cd17ec401d2266cbc8ba79.scope.
Dec 09 16:03:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db64a12e60c25ea6e1b4791727d82bd5d06b40d7f63486f552ae9fd560e40763/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db64a12e60c25ea6e1b4791727d82bd5d06b40d7f63486f552ae9fd560e40763/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db64a12e60c25ea6e1b4791727d82bd5d06b40d7f63486f552ae9fd560e40763/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:01 compute-0 podman[74867]: 2025-12-09 16:03:01.76399212 +0000 UTC m=+0.035642407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:01 compute-0 podman[74867]: 2025-12-09 16:03:01.867843001 +0000 UTC m=+0.139493288 container init 84534ff8a4924a9fe4f8ffff80c0f6bb03089b08d5cd17ec401d2266cbc8ba79 (image=quay.io/ceph/ceph:v20, name=hardcore_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:03:01 compute-0 podman[74867]: 2025-12-09 16:03:01.875974743 +0000 UTC m=+0.147625000 container start 84534ff8a4924a9fe4f8ffff80c0f6bb03089b08d5cd17ec401d2266cbc8ba79 (image=quay.io/ceph/ceph:v20, name=hardcore_murdock, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:01 compute-0 podman[74867]: 2025-12-09 16:03:01.879347669 +0000 UTC m=+0.150997936 container attach 84534ff8a4924a9fe4f8ffff80c0f6bb03089b08d5cd17ec401d2266cbc8ba79 (image=quay.io/ceph/ceph:v20, name=hardcore_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:02 compute-0 ceph-mon[74866]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 09 16:03:02 compute-0 ceph-mon[74866]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2638432760' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:   cluster:
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:     id:     67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:     health: HEALTH_OK
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:  
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:   services:
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:     mon: 1 daemons, quorum compute-0 (age 0.342738s) [leader: compute-0]
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:     mgr: no daemons active
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:     osd: 0 osds: 0 up, 0 in
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:  
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:   data:
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:     pools:   0 pools, 0 pgs
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:     objects: 0 objects, 0 B
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:     usage:   0 B used, 0 B / 0 B avail
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:     pgs:     
Dec 09 16:03:02 compute-0 hardcore_murdock[74921]:  
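
Annotation: the status block above is the output of the mon_command({"prefix": "status"}) dispatch logged at 16:03:02. The same command can be issued through the librados Python binding instead of the CLI; a sketch assuming python3-rados is installed and this deployment's standard conf/keyring paths:

#!/usr/bin/env python3
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                      conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
cluster.connect()
try:
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "status", "format": "json"}), b"")
    status = json.loads(outbuf)
    print(status["fsid"])               # 67f67f44-..., as in the log
    print(status["health"]["status"])   # HEALTH_OK above
finally:
    cluster.shutdown()
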
Dec 09 16:03:02 compute-0 systemd[1]: libpod-84534ff8a4924a9fe4f8ffff80c0f6bb03089b08d5cd17ec401d2266cbc8ba79.scope: Deactivated successfully.
Dec 09 16:03:02 compute-0 podman[74867]: 2025-12-09 16:03:02.141106261 +0000 UTC m=+0.412756538 container died 84534ff8a4924a9fe4f8ffff80c0f6bb03089b08d5cd17ec401d2266cbc8ba79 (image=quay.io/ceph/ceph:v20, name=hardcore_murdock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 09 16:03:02 compute-0 podman[74867]: 2025-12-09 16:03:02.190806798 +0000 UTC m=+0.462457095 container remove 84534ff8a4924a9fe4f8ffff80c0f6bb03089b08d5cd17ec401d2266cbc8ba79 (image=quay.io/ceph/ceph:v20, name=hardcore_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 09 16:03:02 compute-0 systemd[1]: libpod-conmon-84534ff8a4924a9fe4f8ffff80c0f6bb03089b08d5cd17ec401d2266cbc8ba79.scope: Deactivated successfully.
Dec 09 16:03:02 compute-0 podman[74957]: 2025-12-09 16:03:02.27540926 +0000 UTC m=+0.058930251 container create baaa836ce872a0ef099a5c5378155d0cc16557a66a0b8e5554b0159cf12f382e (image=quay.io/ceph/ceph:v20, name=vigilant_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:02 compute-0 systemd[1]: Started libpod-conmon-baaa836ce872a0ef099a5c5378155d0cc16557a66a0b8e5554b0159cf12f382e.scope.
Dec 09 16:03:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:02 compute-0 podman[74957]: 2025-12-09 16:03:02.251127128 +0000 UTC m=+0.034648129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2934f30b82f2960414bfc9c99c034b1961c7a7f003bee254c4975dfd45cda28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2934f30b82f2960414bfc9c99c034b1961c7a7f003bee254c4975dfd45cda28/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2934f30b82f2960414bfc9c99c034b1961c7a7f003bee254c4975dfd45cda28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2934f30b82f2960414bfc9c99c034b1961c7a7f003bee254c4975dfd45cda28/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:02 compute-0 podman[74957]: 2025-12-09 16:03:02.362266967 +0000 UTC m=+0.145787948 container init baaa836ce872a0ef099a5c5378155d0cc16557a66a0b8e5554b0159cf12f382e (image=quay.io/ceph/ceph:v20, name=vigilant_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 09 16:03:02 compute-0 podman[74957]: 2025-12-09 16:03:02.37185456 +0000 UTC m=+0.155375581 container start baaa836ce872a0ef099a5c5378155d0cc16557a66a0b8e5554b0159cf12f382e (image=quay.io/ceph/ceph:v20, name=vigilant_lamarr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:02 compute-0 podman[74957]: 2025-12-09 16:03:02.376027189 +0000 UTC m=+0.159548170 container attach baaa836ce872a0ef099a5c5378155d0cc16557a66a0b8e5554b0159cf12f382e (image=quay.io/ceph/ceph:v20, name=vigilant_lamarr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 09 16:03:02 compute-0 ceph-mon[74866]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 09 16:03:02 compute-0 ceph-mon[74866]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2137345085' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 09 16:03:02 compute-0 ceph-mon[74866]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2137345085' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 09 16:03:02 compute-0 vigilant_lamarr[74973]: 
Dec 09 16:03:02 compute-0 vigilant_lamarr[74973]: [global]
Dec 09 16:03:02 compute-0 vigilant_lamarr[74973]:         fsid = 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:02 compute-0 vigilant_lamarr[74973]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 09 16:03:02 compute-0 vigilant_lamarr[74973]:         osd_crush_chooseleaf_type = 0
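
Annotation: the [global] section above is what `config assimilate-conf` ingested into the mon's configuration database, and the `config generate-minimal-conf` command dispatched just below emits the corresponding stripped-down client file. A round-trip sketch, with /tmp/bootstrap.conf standing in for whatever ini file the bootstrap fed it:

#!/usr/bin/env python3
import subprocess

# Push options from an ini file into the mon config store ...
subprocess.run(["ceph", "config", "assimilate-conf", "-i", "/tmp/bootstrap.conf"],
               check=True)
# ... then regenerate the minimal conf (fsid + mon_host, as in the log).
print(subprocess.check_output(["ceph", "config", "generate-minimal-conf"],
                              text=True))
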
Dec 09 16:03:02 compute-0 systemd[1]: libpod-baaa836ce872a0ef099a5c5378155d0cc16557a66a0b8e5554b0159cf12f382e.scope: Deactivated successfully.
Dec 09 16:03:02 compute-0 podman[74957]: 2025-12-09 16:03:02.607350714 +0000 UTC m=+0.390871745 container died baaa836ce872a0ef099a5c5378155d0cc16557a66a0b8e5554b0159cf12f382e (image=quay.io/ceph/ceph:v20, name=vigilant_lamarr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2934f30b82f2960414bfc9c99c034b1961c7a7f003bee254c4975dfd45cda28-merged.mount: Deactivated successfully.
Dec 09 16:03:02 compute-0 podman[74957]: 2025-12-09 16:03:02.663569497 +0000 UTC m=+0.447090478 container remove baaa836ce872a0ef099a5c5378155d0cc16557a66a0b8e5554b0159cf12f382e (image=quay.io/ceph/ceph:v20, name=vigilant_lamarr, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 16:03:02 compute-0 systemd[1]: libpod-conmon-baaa836ce872a0ef099a5c5378155d0cc16557a66a0b8e5554b0159cf12f382e.scope: Deactivated successfully.
Dec 09 16:03:02 compute-0 podman[75010]: 2025-12-09 16:03:02.739388658 +0000 UTC m=+0.049408009 container create d1a305c06e3671a74b8408d17011ad62a9192f925961a07a09421ac1acf02f58 (image=quay.io/ceph/ceph:v20, name=quirky_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:03:02 compute-0 systemd[1]: Started libpod-conmon-d1a305c06e3671a74b8408d17011ad62a9192f925961a07a09421ac1acf02f58.scope.
Dec 09 16:03:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95817b1cd5c5f9ddb9148d3841adfda0072baafe7b03d1029a71808f28a13406/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95817b1cd5c5f9ddb9148d3841adfda0072baafe7b03d1029a71808f28a13406/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95817b1cd5c5f9ddb9148d3841adfda0072baafe7b03d1029a71808f28a13406/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95817b1cd5c5f9ddb9148d3841adfda0072baafe7b03d1029a71808f28a13406/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:02 compute-0 ceph-mon[74866]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 09 16:03:02 compute-0 ceph-mon[74866]: monmap epoch 1
Dec 09 16:03:02 compute-0 ceph-mon[74866]: fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:02 compute-0 ceph-mon[74866]: last_changed 2025-12-09T16:02:59.529794+0000
Dec 09 16:03:02 compute-0 ceph-mon[74866]: created 2025-12-09T16:02:59.529794+0000
Dec 09 16:03:02 compute-0 ceph-mon[74866]: min_mon_release 20 (tentacle)
Dec 09 16:03:02 compute-0 ceph-mon[74866]: election_strategy: 1
Dec 09 16:03:02 compute-0 ceph-mon[74866]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 09 16:03:02 compute-0 ceph-mon[74866]: fsmap 
Dec 09 16:03:02 compute-0 ceph-mon[74866]: osdmap e1: 0 total, 0 up, 0 in
Dec 09 16:03:02 compute-0 ceph-mon[74866]: mgrmap e1: no daemons active
Dec 09 16:03:02 compute-0 ceph-mon[74866]: from='client.? 192.168.122.100:0/2638432760' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 09 16:03:02 compute-0 ceph-mon[74866]: from='client.? 192.168.122.100:0/2137345085' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 09 16:03:02 compute-0 ceph-mon[74866]: from='client.? 192.168.122.100:0/2137345085' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 09 16:03:02 compute-0 podman[75010]: 2025-12-09 16:03:02.717121924 +0000 UTC m=+0.027141285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:02 compute-0 podman[75010]: 2025-12-09 16:03:02.81450618 +0000 UTC m=+0.124525551 container init d1a305c06e3671a74b8408d17011ad62a9192f925961a07a09421ac1acf02f58 (image=quay.io/ceph/ceph:v20, name=quirky_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:02 compute-0 podman[75010]: 2025-12-09 16:03:02.822288572 +0000 UTC m=+0.132307913 container start d1a305c06e3671a74b8408d17011ad62a9192f925961a07a09421ac1acf02f58 (image=quay.io/ceph/ceph:v20, name=quirky_ardinghelli, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:02 compute-0 podman[75010]: 2025-12-09 16:03:02.826186933 +0000 UTC m=+0.136206324 container attach d1a305c06e3671a74b8408d17011ad62a9192f925961a07a09421ac1acf02f58 (image=quay.io/ceph/ceph:v20, name=quirky_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 16:03:03 compute-0 ceph-mon[74866]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:03:03 compute-0 ceph-mon[74866]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588226206' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:03 compute-0 systemd[1]: libpod-d1a305c06e3671a74b8408d17011ad62a9192f925961a07a09421ac1acf02f58.scope: Deactivated successfully.
Dec 09 16:03:03 compute-0 podman[75052]: 2025-12-09 16:03:03.076831149 +0000 UTC m=+0.034597817 container died d1a305c06e3671a74b8408d17011ad62a9192f925961a07a09421ac1acf02f58 (image=quay.io/ceph/ceph:v20, name=quirky_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-95817b1cd5c5f9ddb9148d3841adfda0072baafe7b03d1029a71808f28a13406-merged.mount: Deactivated successfully.
Dec 09 16:03:03 compute-0 podman[75052]: 2025-12-09 16:03:03.124711144 +0000 UTC m=+0.082477752 container remove d1a305c06e3671a74b8408d17011ad62a9192f925961a07a09421ac1acf02f58 (image=quay.io/ceph/ceph:v20, name=quirky_ardinghelli, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:03 compute-0 systemd[1]: libpod-conmon-d1a305c06e3671a74b8408d17011ad62a9192f925961a07a09421ac1acf02f58.scope: Deactivated successfully.
Dec 09 16:03:03 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf...
Dec 09 16:03:03 compute-0 ceph-mon[74866]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 09 16:03:03 compute-0 ceph-mon[74866]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 09 16:03:03 compute-0 ceph-mon[74866]: mon.compute-0@0(leader) e1 shutdown
Dec 09 16:03:03 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0[74862]: 2025-12-09T16:03:03.380+0000 7f43ff6c3640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 09 16:03:03 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0[74862]: 2025-12-09T16:03:03.380+0000 7f43ff6c3640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 09 16:03:03 compute-0 ceph-mon[74866]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 09 16:03:03 compute-0 ceph-mon[74866]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 09 16:03:03 compute-0 podman[75097]: 2025-12-09 16:03:03.647023675 +0000 UTC m=+0.313542210 container died 47b75c7e1d669f3472e43aedf1538924b572574dd219d7e5f3f9ef1ce8f59e7f (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b778a57d82853619bf3847283bc6cfbcab2d2bf877cb1bc7d00c94bfce61e7d1-merged.mount: Deactivated successfully.
Dec 09 16:03:03 compute-0 podman[75097]: 2025-12-09 16:03:03.684122033 +0000 UTC m=+0.350640568 container remove 47b75c7e1d669f3472e43aedf1538924b572574dd219d7e5f3f9ef1ce8f59e7f (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 09 16:03:03 compute-0 bash[75097]: ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0
Dec 09 16:03:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 16:03:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 16:03:03 compute-0 systemd[1]: ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@mon.compute-0.service: Deactivated successfully.
Dec 09 16:03:03 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:03:03 compute-0 systemd[1]: ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@mon.compute-0.service: Consumed 1.035s CPU time.
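
Annotation: this stop/start cycle is the mon being cycled through its cephadm-managed systemd unit, named ceph-<fsid>@mon.<host>.service as the journal shows. A small sketch for inspecting that unit, with the name copied verbatim from the lines above:

#!/usr/bin/env python3
import subprocess

unit = "ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@mon.compute-0.service"
subprocess.run(["systemctl", "status", "--no-pager", unit], check=False)
# `journalctl -u <unit>` replays the same ceph-mon/podman lines seen here.
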
Dec 09 16:03:03 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf...
Dec 09 16:03:04 compute-0 podman[75202]: 2025-12-09 16:03:04.074621365 +0000 UTC m=+0.060997160 container create 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899f00770b450d8fd8d68af8f0fb6aa81333156e58e00294e9f2d7080ad8954c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899f00770b450d8fd8d68af8f0fb6aa81333156e58e00294e9f2d7080ad8954c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899f00770b450d8fd8d68af8f0fb6aa81333156e58e00294e9f2d7080ad8954c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899f00770b450d8fd8d68af8f0fb6aa81333156e58e00294e9f2d7080ad8954c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:04 compute-0 podman[75202]: 2025-12-09 16:03:04.136321364 +0000 UTC m=+0.122697239 container init 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 16:03:04 compute-0 podman[75202]: 2025-12-09 16:03:04.047566563 +0000 UTC m=+0.033942438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:04 compute-0 podman[75202]: 2025-12-09 16:03:04.144836727 +0000 UTC m=+0.131212542 container start 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:04 compute-0 bash[75202]: 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6
Dec 09 16:03:04 compute-0 systemd[1]: Started Ceph mon.compute-0 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:03:04 compute-0 ceph-mon[75222]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 16:03:04 compute-0 ceph-mon[75222]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 09 16:03:04 compute-0 ceph-mon[75222]: pidfile_write: ignore empty --pid-file
Dec 09 16:03:04 compute-0 ceph-mon[75222]: load: jerasure load: lrc 
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: RocksDB version: 7.9.2
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Git sha 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: DB SUMMARY
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: DB Session ID:  ANVSLO0IM0FKQE2HIWIF
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: CURRENT file:  CURRENT
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: IDENTITY file:  IDENTITY
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60237 ; 
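
Annotation: the DB SUMMARY above inventories the mon's RocksDB store: CURRENT, IDENTITY, MANIFEST-000010 (179 bytes), one SST file (000008.sst), and the 60237-byte WAL 000009.log that recovery replays. The same inventory with nothing but the standard library, using the store path from the log (run as root; the files belong to ceph:ceph, uid/gid 167 per the startup line above):

#!/usr/bin/env python3
import os

store = "/var/lib/ceph/mon/ceph-compute-0/store.db"
for name in sorted(os.listdir(store)):
    print(f"{os.path.getsize(os.path.join(store, name)):>10} B  {name}")
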
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                         Options.error_if_exists: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                       Options.create_if_missing: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                         Options.paranoid_checks: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                                     Options.env: 0x55ad053dd440
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                                Options.info_log: 0x55ad05eb9e80
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                Options.max_file_opening_threads: 16
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                              Options.statistics: (nil)
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                               Options.use_fsync: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                       Options.max_log_file_size: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                         Options.allow_fallocate: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                        Options.use_direct_reads: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:          Options.create_missing_column_families: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                              Options.db_log_dir: 
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                                 Options.wal_dir: 
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                   Options.advise_random_on_open: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                    Options.write_buffer_manager: 0x55ad05f04140
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                            Options.rate_limiter: (nil)
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                  Options.unordered_write: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                               Options.row_cache: None
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                              Options.wal_filter: None
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.allow_ingest_behind: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.two_write_queues: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.manual_wal_flush: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.wal_compression: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.atomic_flush: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                 Options.log_readahead_size: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.allow_data_in_errors: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.db_host_id: __hostname__
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.max_background_jobs: 2
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.max_background_compactions: -1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.max_subcompactions: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.max_total_wal_size: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                          Options.max_open_files: -1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                          Options.bytes_per_sync: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:       Options.compaction_readahead_size: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                  Options.max_background_flushes: -1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Compression algorithms supported:
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         kZSTD supported: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         kXpressCompression supported: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         kBZip2Compression supported: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         kLZ4Compression supported: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         kZlibCompression supported: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         kLZ4HCCompression supported: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         kSnappyCompression supported: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:           Options.merge_operator: 
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:        Options.compaction_filter: None
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ad05f10a00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ad05ef58d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:        Options.write_buffer_size: 33554432
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:  Options.max_write_buffer_number: 2
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:          Options.compression: NoCompression
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.num_levels: 7
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 592c5a45-08c3-40c7-974d-53c403a6ec6c
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296184209525, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296184214039, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58436, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55788, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296184, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296184214219, "job": 1, "event": "recovery_finished"}
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ad05f22e00
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: DB pointer 0x55ad0606c000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:03:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ad05ef58d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 09 16:03:04 compute-0 ceph-mon[75222]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@-1(???) e1 preinit fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@-1(???).mds e1 new map
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-12-09T16:03:01:781860+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 09 16:03:04 compute-0 ceph-mon[75222]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 16:03:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 09 16:03:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : last_changed 2025-12-09T16:02:59.529794+0000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : created 2025-12-09T16:02:59.529794+0000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 09 16:03:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 16:03:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : fsmap 
Dec 09 16:03:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 09 16:03:04 compute-0 podman[75223]: 2025-12-09 16:03:04.246346251 +0000 UTC m=+0.060010002 container create fea5a4de3c7a2c52bb3e2f5df3b068e35daa7525e3273c40182d904b0d235613 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 09 16:03:04 compute-0 systemd[1]: Started libpod-conmon-fea5a4de3c7a2c52bb3e2f5df3b068e35daa7525e3273c40182d904b0d235613.scope.
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 09 16:03:04 compute-0 ceph-mon[75222]: monmap epoch 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:04 compute-0 ceph-mon[75222]: last_changed 2025-12-09T16:02:59.529794+0000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: created 2025-12-09T16:02:59.529794+0000
Dec 09 16:03:04 compute-0 ceph-mon[75222]: min_mon_release 20 (tentacle)
Dec 09 16:03:04 compute-0 ceph-mon[75222]: election_strategy: 1
Dec 09 16:03:04 compute-0 ceph-mon[75222]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 09 16:03:04 compute-0 ceph-mon[75222]: fsmap 
Dec 09 16:03:04 compute-0 ceph-mon[75222]: osdmap e1: 0 total, 0 up, 0 in
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mgrmap e1: no daemons active
Dec 09 16:03:04 compute-0 podman[75223]: 2025-12-09 16:03:04.216234682 +0000 UTC m=+0.029898423 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:04 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb26add32ea26a8d61b31d1b648b1dc84bd708f0c7d32ac5e701c148add9a3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb26add32ea26a8d61b31d1b648b1dc84bd708f0c7d32ac5e701c148add9a3b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb26add32ea26a8d61b31d1b648b1dc84bd708f0c7d32ac5e701c148add9a3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:04 compute-0 podman[75223]: 2025-12-09 16:03:04.366379983 +0000 UTC m=+0.180043764 container init fea5a4de3c7a2c52bb3e2f5df3b068e35daa7525e3273c40182d904b0d235613 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:04 compute-0 podman[75223]: 2025-12-09 16:03:04.376923873 +0000 UTC m=+0.190587584 container start fea5a4de3c7a2c52bb3e2f5df3b068e35daa7525e3273c40182d904b0d235613 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 09 16:03:04 compute-0 podman[75223]: 2025-12-09 16:03:04.380384152 +0000 UTC m=+0.194047863 container attach fea5a4de3c7a2c52bb3e2f5df3b068e35daa7525e3273c40182d904b0d235613 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 09 16:03:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec 09 16:03:04 compute-0 systemd[1]: libpod-fea5a4de3c7a2c52bb3e2f5df3b068e35daa7525e3273c40182d904b0d235613.scope: Deactivated successfully.
Dec 09 16:03:04 compute-0 podman[75223]: 2025-12-09 16:03:04.626622032 +0000 UTC m=+0.440285743 container died fea5a4de3c7a2c52bb3e2f5df3b068e35daa7525e3273c40182d904b0d235613 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:04 compute-0 podman[75223]: 2025-12-09 16:03:04.663284117 +0000 UTC m=+0.476947818 container remove fea5a4de3c7a2c52bb3e2f5df3b068e35daa7525e3273c40182d904b0d235613 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:04 compute-0 systemd[1]: libpod-conmon-fea5a4de3c7a2c52bb3e2f5df3b068e35daa7525e3273c40182d904b0d235613.scope: Deactivated successfully.
Dec 09 16:03:04 compute-0 podman[75316]: 2025-12-09 16:03:04.737532714 +0000 UTC m=+0.045148238 container create c6ac6059933da19e28fd3d644e6626f92f78fce2efff370b52baad63a21c82b7 (image=quay.io/ceph/ceph:v20, name=great_wescoff, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:04 compute-0 systemd[1]: Started libpod-conmon-c6ac6059933da19e28fd3d644e6626f92f78fce2efff370b52baad63a21c82b7.scope.
Dec 09 16:03:04 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348b055d6f15ce1bd395ef88450d8e870d204af7efb232634406dcf9912135bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348b055d6f15ce1bd395ef88450d8e870d204af7efb232634406dcf9912135bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348b055d6f15ce1bd395ef88450d8e870d204af7efb232634406dcf9912135bf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:04 compute-0 podman[75316]: 2025-12-09 16:03:04.810768282 +0000 UTC m=+0.118383816 container init c6ac6059933da19e28fd3d644e6626f92f78fce2efff370b52baad63a21c82b7 (image=quay.io/ceph/ceph:v20, name=great_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 09 16:03:04 compute-0 podman[75316]: 2025-12-09 16:03:04.718226984 +0000 UTC m=+0.025842548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:04 compute-0 podman[75316]: 2025-12-09 16:03:04.823473595 +0000 UTC m=+0.131089159 container start c6ac6059933da19e28fd3d644e6626f92f78fce2efff370b52baad63a21c82b7 (image=quay.io/ceph/ceph:v20, name=great_wescoff, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:04 compute-0 podman[75316]: 2025-12-09 16:03:04.828449906 +0000 UTC m=+0.136065470 container attach c6ac6059933da19e28fd3d644e6626f92f78fce2efff370b52baad63a21c82b7 (image=quay.io/ceph/ceph:v20, name=great_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:03:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec 09 16:03:05 compute-0 systemd[1]: libpod-c6ac6059933da19e28fd3d644e6626f92f78fce2efff370b52baad63a21c82b7.scope: Deactivated successfully.
Dec 09 16:03:05 compute-0 podman[75316]: 2025-12-09 16:03:05.056124917 +0000 UTC m=+0.363740451 container died c6ac6059933da19e28fd3d644e6626f92f78fce2efff370b52baad63a21c82b7 (image=quay.io/ceph/ceph:v20, name=great_wescoff, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-348b055d6f15ce1bd395ef88450d8e870d204af7efb232634406dcf9912135bf-merged.mount: Deactivated successfully.
Dec 09 16:03:05 compute-0 podman[75316]: 2025-12-09 16:03:05.096481098 +0000 UTC m=+0.404096622 container remove c6ac6059933da19e28fd3d644e6626f92f78fce2efff370b52baad63a21c82b7 (image=quay.io/ceph/ceph:v20, name=great_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:03:05 compute-0 systemd[1]: libpod-conmon-c6ac6059933da19e28fd3d644e6626f92f78fce2efff370b52baad63a21c82b7.scope: Deactivated successfully.
Dec 09 16:03:05 compute-0 systemd[1]: Reloading.
Dec 09 16:03:05 compute-0 systemd-rc-local-generator[75401]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:05 compute-0 systemd-sysv-generator[75405]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:03:05 compute-0 systemd[1]: Reloading.
Dec 09 16:03:05 compute-0 systemd-sysv-generator[75442]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:03:05 compute-0 systemd-rc-local-generator[75439]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:05 compute-0 systemd[1]: Starting Ceph mgr.compute-0.ysegzv for 67f67f44-54fc-54ea-8df0-10931b6ecdaf...
Dec 09 16:03:06 compute-0 podman[75496]: 2025-12-09 16:03:06.025077472 +0000 UTC m=+0.062567525 container create f232def5bd3d41fdf0b35f628fe45f0e39d35b90e0d04b3d069f81dcb3d82662 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37c0497699fb8f415b55c05c3051410023af17b3b8cbf5af0fa2e2206b776b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37c0497699fb8f415b55c05c3051410023af17b3b8cbf5af0fa2e2206b776b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37c0497699fb8f415b55c05c3051410023af17b3b8cbf5af0fa2e2206b776b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37c0497699fb8f415b55c05c3051410023af17b3b8cbf5af0fa2e2206b776b2/merged/var/lib/ceph/mgr/ceph-compute-0.ysegzv supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:06 compute-0 podman[75496]: 2025-12-09 16:03:06.092324179 +0000 UTC m=+0.129814242 container init f232def5bd3d41fdf0b35f628fe45f0e39d35b90e0d04b3d069f81dcb3d82662 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:06 compute-0 podman[75496]: 2025-12-09 16:03:05.998858475 +0000 UTC m=+0.036348558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:06 compute-0 podman[75496]: 2025-12-09 16:03:06.097450015 +0000 UTC m=+0.134940058 container start f232def5bd3d41fdf0b35f628fe45f0e39d35b90e0d04b3d069f81dcb3d82662 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 09 16:03:06 compute-0 bash[75496]: f232def5bd3d41fdf0b35f628fe45f0e39d35b90e0d04b3d069f81dcb3d82662
Dec 09 16:03:06 compute-0 systemd[1]: Started Ceph mgr.compute-0.ysegzv for 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:03:06 compute-0 ceph-mgr[75515]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 16:03:06 compute-0 ceph-mgr[75515]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 09 16:03:06 compute-0 ceph-mgr[75515]: pidfile_write: ignore empty --pid-file
Dec 09 16:03:06 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'alerts'
Dec 09 16:03:06 compute-0 podman[75516]: 2025-12-09 16:03:06.194504152 +0000 UTC m=+0.053191027 container create ae7acea7b16f27be101a915226797c0d8806e753e5451b86d372dd4b45c05df5 (image=quay.io/ceph/ceph:v20, name=amazing_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:06 compute-0 systemd[1]: Started libpod-conmon-ae7acea7b16f27be101a915226797c0d8806e753e5451b86d372dd4b45c05df5.scope.
Dec 09 16:03:06 compute-0 podman[75516]: 2025-12-09 16:03:06.173169974 +0000 UTC m=+0.031856829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:06 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c5dbc78df1c999f371a91e3790901b20c2af2b3c690e2c435024c2fdc4f8781/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c5dbc78df1c999f371a91e3790901b20c2af2b3c690e2c435024c2fdc4f8781/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c5dbc78df1c999f371a91e3790901b20c2af2b3c690e2c435024c2fdc4f8781/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:06 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'balancer'
Dec 09 16:03:06 compute-0 podman[75516]: 2025-12-09 16:03:06.31297743 +0000 UTC m=+0.171664305 container init ae7acea7b16f27be101a915226797c0d8806e753e5451b86d372dd4b45c05df5 (image=quay.io/ceph/ceph:v20, name=amazing_hodgkin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:03:06 compute-0 podman[75516]: 2025-12-09 16:03:06.322764699 +0000 UTC m=+0.181451534 container start ae7acea7b16f27be101a915226797c0d8806e753e5451b86d372dd4b45c05df5 (image=quay.io/ceph/ceph:v20, name=amazing_hodgkin, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:03:06 compute-0 podman[75516]: 2025-12-09 16:03:06.326888137 +0000 UTC m=+0.185575022 container attach ae7acea7b16f27be101a915226797c0d8806e753e5451b86d372dd4b45c05df5 (image=quay.io/ceph/ceph:v20, name=amazing_hodgkin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 09 16:03:06 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'cephadm'
Dec 09 16:03:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 09 16:03:06 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/893489142' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]: 
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]: {
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "health": {
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "status": "HEALTH_OK",
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "checks": {},
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "mutes": []
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     },
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "election_epoch": 5,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "quorum": [
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         0
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     ],
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "quorum_names": [
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "compute-0"
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     ],
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "quorum_age": 2,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "monmap": {
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "epoch": 1,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "min_mon_release_name": "tentacle",
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "num_mons": 1
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     },
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "osdmap": {
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "epoch": 1,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "num_osds": 0,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "num_up_osds": 0,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "osd_up_since": 0,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "num_in_osds": 0,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "osd_in_since": 0,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "num_remapped_pgs": 0
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     },
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "pgmap": {
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "pgs_by_state": [],
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "num_pgs": 0,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "num_pools": 0,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "num_objects": 0,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "data_bytes": 0,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "bytes_used": 0,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "bytes_avail": 0,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "bytes_total": 0
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     },
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "fsmap": {
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "epoch": 1,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "btime": "2025-12-09T16:03:01:781860+0000",
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "by_rank": [],
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "up:standby": 0
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     },
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "mgrmap": {
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "available": false,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "num_standbys": 0,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "modules": [
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:             "iostat",
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:             "nfs"
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         ],
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "services": {}
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     },
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "servicemap": {
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "epoch": 1,
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "modified": "2025-12-09T16:03:01.783688+0000",
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:         "services": {}
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     },
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]:     "progress_events": {}
Dec 09 16:03:06 compute-0 amazing_hodgkin[75549]: }
Dec 09 16:03:06 compute-0 systemd[1]: libpod-ae7acea7b16f27be101a915226797c0d8806e753e5451b86d372dd4b45c05df5.scope: Deactivated successfully.
Dec 09 16:03:06 compute-0 podman[75516]: 2025-12-09 16:03:06.545421027 +0000 UTC m=+0.404107902 container died ae7acea7b16f27be101a915226797c0d8806e753e5451b86d372dd4b45c05df5 (image=quay.io/ceph/ceph:v20, name=amazing_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:03:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c5dbc78df1c999f371a91e3790901b20c2af2b3c690e2c435024c2fdc4f8781-merged.mount: Deactivated successfully.
Dec 09 16:03:06 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/893489142' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 09 16:03:06 compute-0 podman[75516]: 2025-12-09 16:03:06.683287878 +0000 UTC m=+0.541974713 container remove ae7acea7b16f27be101a915226797c0d8806e753e5451b86d372dd4b45c05df5 (image=quay.io/ceph/ceph:v20, name=amazing_hodgkin, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 16:03:06 compute-0 systemd[1]: libpod-conmon-ae7acea7b16f27be101a915226797c0d8806e753e5451b86d372dd4b45c05df5.scope: Deactivated successfully.
Dec 09 16:03:07 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'crash'
Dec 09 16:03:07 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'dashboard'
Dec 09 16:03:08 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'devicehealth'
Dec 09 16:03:08 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'diskprediction_local'
Dec 09 16:03:08 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv[75511]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 09 16:03:08 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv[75511]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 09 16:03:08 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv[75511]:   from numpy import show_config as show_numpy_config
Dec 09 16:03:08 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'influx'
Dec 09 16:03:08 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'insights'
Dec 09 16:03:08 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'iostat'
Dec 09 16:03:08 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'k8sevents'
Dec 09 16:03:08 compute-0 podman[75598]: 2025-12-09 16:03:08.785454049 +0000 UTC m=+0.064654844 container create a4ca642bf2445e3b665905f52d0aaf3330684fd6c96e0221c8059407c7005fa5 (image=quay.io/ceph/ceph:v20, name=determined_hawking, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:08 compute-0 systemd[1]: Started libpod-conmon-a4ca642bf2445e3b665905f52d0aaf3330684fd6c96e0221c8059407c7005fa5.scope.
Dec 09 16:03:08 compute-0 podman[75598]: 2025-12-09 16:03:08.754681232 +0000 UTC m=+0.033882087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:08 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9f9d1655ef2508655ab2edb8236c5730a76cf306fc684c0d092b030d276680/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9f9d1655ef2508655ab2edb8236c5730a76cf306fc684c0d092b030d276680/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9f9d1655ef2508655ab2edb8236c5730a76cf306fc684c0d092b030d276680/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:08 compute-0 podman[75598]: 2025-12-09 16:03:08.886079358 +0000 UTC m=+0.165280143 container init a4ca642bf2445e3b665905f52d0aaf3330684fd6c96e0221c8059407c7005fa5 (image=quay.io/ceph/ceph:v20, name=determined_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:08 compute-0 podman[75598]: 2025-12-09 16:03:08.895739373 +0000 UTC m=+0.174940138 container start a4ca642bf2445e3b665905f52d0aaf3330684fd6c96e0221c8059407c7005fa5 (image=quay.io/ceph/ceph:v20, name=determined_hawking, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:08 compute-0 podman[75598]: 2025-12-09 16:03:08.899316395 +0000 UTC m=+0.178517180 container attach a4ca642bf2445e3b665905f52d0aaf3330684fd6c96e0221c8059407c7005fa5 (image=quay.io/ceph/ceph:v20, name=determined_hawking, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 09 16:03:09 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1943951740' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 09 16:03:09 compute-0 determined_hawking[75615]: 
Dec 09 16:03:09 compute-0 determined_hawking[75615]: {
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "health": {
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "status": "HEALTH_OK",
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "checks": {},
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "mutes": []
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     },
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "election_epoch": 5,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "quorum": [
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         0
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     ],
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "quorum_names": [
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "compute-0"
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     ],
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "quorum_age": 4,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "monmap": {
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "epoch": 1,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "min_mon_release_name": "tentacle",
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "num_mons": 1
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     },
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "osdmap": {
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "epoch": 1,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "num_osds": 0,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "num_up_osds": 0,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "osd_up_since": 0,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "num_in_osds": 0,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "osd_in_since": 0,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "num_remapped_pgs": 0
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     },
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "pgmap": {
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "pgs_by_state": [],
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "num_pgs": 0,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "num_pools": 0,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "num_objects": 0,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "data_bytes": 0,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "bytes_used": 0,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "bytes_avail": 0,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "bytes_total": 0
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     },
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "fsmap": {
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "epoch": 1,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "btime": "2025-12-09T16:03:01.781860+0000",
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "by_rank": [],
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "up:standby": 0
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     },
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "mgrmap": {
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "available": false,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "num_standbys": 0,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "modules": [
Dec 09 16:03:09 compute-0 determined_hawking[75615]:             "iostat",
Dec 09 16:03:09 compute-0 determined_hawking[75615]:             "nfs"
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         ],
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "services": {}
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     },
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "servicemap": {
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "epoch": 1,
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "modified": "2025-12-09T16:03:01.783688+0000",
Dec 09 16:03:09 compute-0 determined_hawking[75615]:         "services": {}
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     },
Dec 09 16:03:09 compute-0 determined_hawking[75615]:     "progress_events": {}
Dec 09 16:03:09 compute-0 determined_hawking[75615]: }
Dec 09 16:03:09 compute-0 systemd[1]: libpod-a4ca642bf2445e3b665905f52d0aaf3330684fd6c96e0221c8059407c7005fa5.scope: Deactivated successfully.
Dec 09 16:03:09 compute-0 podman[75598]: 2025-12-09 16:03:09.113226294 +0000 UTC m=+0.392427099 container died a4ca642bf2445e3b665905f52d0aaf3330684fd6c96e0221c8059407c7005fa5 (image=quay.io/ceph/ceph:v20, name=determined_hawking, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 16:03:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc9f9d1655ef2508655ab2edb8236c5730a76cf306fc684c0d092b030d276680-merged.mount: Deactivated successfully.
Dec 09 16:03:09 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1943951740' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 09 16:03:09 compute-0 podman[75598]: 2025-12-09 16:03:09.1551824 +0000 UTC m=+0.434383175 container remove a4ca642bf2445e3b665905f52d0aaf3330684fd6c96e0221c8059407c7005fa5 (image=quay.io/ceph/ceph:v20, name=determined_hawking, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 16:03:09 compute-0 systemd[1]: libpod-conmon-a4ca642bf2445e3b665905f52d0aaf3330684fd6c96e0221c8059407c7005fa5.scope: Deactivated successfully.
Dec 09 16:03:09 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'localpool'
Dec 09 16:03:09 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'mds_autoscaler'
Dec 09 16:03:09 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'mirroring'
Dec 09 16:03:09 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'nfs'
Dec 09 16:03:09 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'orchestrator'
Dec 09 16:03:10 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'osd_perf_query'
Dec 09 16:03:10 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'osd_support'
Dec 09 16:03:10 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'pg_autoscaler'
Dec 09 16:03:10 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'progress'
Dec 09 16:03:10 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'prometheus'
Dec 09 16:03:10 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'rbd_support'
Dec 09 16:03:10 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'rgw'
Dec 09 16:03:11 compute-0 podman[75653]: 2025-12-09 16:03:11.241952833 +0000 UTC m=+0.055289488 container create c2ccd2ca9f8499494bfe7e09f48182f68fdab3f13974c15cd082b74bfd570c26 (image=quay.io/ceph/ceph:v20, name=dazzling_mahavira, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:11 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'rook'
Dec 09 16:03:11 compute-0 systemd[1]: Started libpod-conmon-c2ccd2ca9f8499494bfe7e09f48182f68fdab3f13974c15cd082b74bfd570c26.scope.
Dec 09 16:03:11 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff93fe54600545c89fd28ef1e42d6ba0aea7871a2e0a8b351acdffa1040f042b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff93fe54600545c89fd28ef1e42d6ba0aea7871a2e0a8b351acdffa1040f042b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff93fe54600545c89fd28ef1e42d6ba0aea7871a2e0a8b351acdffa1040f042b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:11 compute-0 podman[75653]: 2025-12-09 16:03:11.304825885 +0000 UTC m=+0.118162520 container init c2ccd2ca9f8499494bfe7e09f48182f68fdab3f13974c15cd082b74bfd570c26 (image=quay.io/ceph/ceph:v20, name=dazzling_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 09 16:03:11 compute-0 podman[75653]: 2025-12-09 16:03:11.216190608 +0000 UTC m=+0.029527243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:11 compute-0 podman[75653]: 2025-12-09 16:03:11.313669827 +0000 UTC m=+0.127006442 container start c2ccd2ca9f8499494bfe7e09f48182f68fdab3f13974c15cd082b74bfd570c26 (image=quay.io/ceph/ceph:v20, name=dazzling_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:11 compute-0 podman[75653]: 2025-12-09 16:03:11.318353291 +0000 UTC m=+0.131689906 container attach c2ccd2ca9f8499494bfe7e09f48182f68fdab3f13974c15cd082b74bfd570c26 (image=quay.io/ceph/ceph:v20, name=dazzling_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 09 16:03:11 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594426500' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]: 
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]: {
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "health": {
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "status": "HEALTH_OK",
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "checks": {},
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "mutes": []
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     },
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "election_epoch": 5,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "quorum": [
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         0
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     ],
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "quorum_names": [
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "compute-0"
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     ],
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "quorum_age": 7,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "monmap": {
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "epoch": 1,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "min_mon_release_name": "tentacle",
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "num_mons": 1
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     },
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "osdmap": {
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "epoch": 1,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "num_osds": 0,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "num_up_osds": 0,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "osd_up_since": 0,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "num_in_osds": 0,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "osd_in_since": 0,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "num_remapped_pgs": 0
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     },
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "pgmap": {
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "pgs_by_state": [],
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "num_pgs": 0,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "num_pools": 0,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "num_objects": 0,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "data_bytes": 0,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "bytes_used": 0,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "bytes_avail": 0,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "bytes_total": 0
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     },
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "fsmap": {
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "epoch": 1,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "btime": "2025-12-09T16:03:01.781860+0000",
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "by_rank": [],
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "up:standby": 0
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     },
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "mgrmap": {
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "available": false,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "num_standbys": 0,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "modules": [
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:             "iostat",
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:             "nfs"
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         ],
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "services": {}
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     },
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "servicemap": {
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "epoch": 1,
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "modified": "2025-12-09T16:03:01.783688+0000",
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:         "services": {}
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     },
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]:     "progress_events": {}
Dec 09 16:03:11 compute-0 dazzling_mahavira[75669]: }
Dec 09 16:03:11 compute-0 systemd[1]: libpod-c2ccd2ca9f8499494bfe7e09f48182f68fdab3f13974c15cd082b74bfd570c26.scope: Deactivated successfully.
Dec 09 16:03:11 compute-0 podman[75653]: 2025-12-09 16:03:11.515778749 +0000 UTC m=+0.329115364 container died c2ccd2ca9f8499494bfe7e09f48182f68fdab3f13974c15cd082b74bfd570c26 (image=quay.io/ceph/ceph:v20, name=dazzling_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff93fe54600545c89fd28ef1e42d6ba0aea7871a2e0a8b351acdffa1040f042b-merged.mount: Deactivated successfully.
Dec 09 16:03:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2594426500' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 09 16:03:11 compute-0 podman[75653]: 2025-12-09 16:03:11.553779253 +0000 UTC m=+0.367115868 container remove c2ccd2ca9f8499494bfe7e09f48182f68fdab3f13974c15cd082b74bfd570c26 (image=quay.io/ceph/ceph:v20, name=dazzling_mahavira, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 09 16:03:11 compute-0 systemd[1]: libpod-conmon-c2ccd2ca9f8499494bfe7e09f48182f68fdab3f13974c15cd082b74bfd570c26.scope: Deactivated successfully.
Dec 09 16:03:11 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'selftest'
Dec 09 16:03:11 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'smb'
Dec 09 16:03:12 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'snap_schedule'
Dec 09 16:03:12 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'stats'
Dec 09 16:03:12 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'status'
Dec 09 16:03:12 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'telegraf'
Dec 09 16:03:12 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'telemetry'
Dec 09 16:03:12 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'test_orchestrator'
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'volumes'
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: ms_deliver_dispatch: unhandled message 0x55e9d069f860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ysegzv
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr handle_mgr_map Activating!
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr handle_mgr_map I am now activating
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.ysegzv(active, starting, since 0.0112613s)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mds metadata"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).mds e1 all = 1
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mon metadata"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ysegzv", "id": "compute-0.ysegzv"} v 0)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mgr metadata", "who": "compute-0.ysegzv", "id": "compute-0.ysegzv"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: balancer
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : Manager daemon compute-0.ysegzv is now available
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: crash
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [balancer INFO root] Starting
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:03:13
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [balancer INFO root] No pools available
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: devicehealth
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [devicehealth INFO root] Starting
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: iostat
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: nfs
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: orchestrator
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: pg_autoscaler
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: progress
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [progress INFO root] Loading...
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [progress INFO root] No stored events to load
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [progress INFO root] Loaded [] historic events
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [progress INFO root] Loaded OSDMap, ready.
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [rbd_support INFO root] recovery thread starting
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [rbd_support INFO root] starting setup
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: rbd_support
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: status
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: telemetry
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ysegzv/mirror_snapshot_schedule"} v 0)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ysegzv/mirror_snapshot_schedule"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [rbd_support INFO root] PerfHandler: starting
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TaskHandler: starting
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ysegzv/trash_purge_schedule"} v 0)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ysegzv/trash_purge_schedule"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: [rbd_support INFO root] setup complete
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: Activating manager daemon compute-0.ysegzv
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mgrmap e2: compute-0.ysegzv(active, starting, since 0.0112613s)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mds metadata"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mon[75222]: from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mon[75222]: from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mon metadata"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mon[75222]: from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mon[75222]: from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mgr metadata", "who": "compute-0.ysegzv", "id": "compute-0.ysegzv"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mon[75222]: Manager daemon compute-0.ysegzv is now available
Dec 09 16:03:13 compute-0 ceph-mon[75222]: from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ysegzv/mirror_snapshot_schedule"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mon[75222]: from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ysegzv/trash_purge_schedule"} : dispatch
Dec 09 16:03:13 compute-0 ceph-mon[75222]: from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:13 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: volumes
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:13 compute-0 podman[75786]: 2025-12-09 16:03:13.621601536 +0000 UTC m=+0.042831592 container create 40849686ed810d38c50f0ec5c8ef1405d3ca7ff2440592475557ffcc46816a6e (image=quay.io/ceph/ceph:v20, name=wonderful_einstein, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 16:03:13 compute-0 systemd[1]: Started libpod-conmon-40849686ed810d38c50f0ec5c8ef1405d3ca7ff2440592475557ffcc46816a6e.scope.
Dec 09 16:03:13 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac4af837f91683c45518e61d104f680959ae7a0f9de31d5c997c0bda7c15a9e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac4af837f91683c45518e61d104f680959ae7a0f9de31d5c997c0bda7c15a9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac4af837f91683c45518e61d104f680959ae7a0f9de31d5c997c0bda7c15a9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:13 compute-0 podman[75786]: 2025-12-09 16:03:13.602815181 +0000 UTC m=+0.024045257 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:13 compute-0 podman[75786]: 2025-12-09 16:03:13.711640343 +0000 UTC m=+0.132870429 container init 40849686ed810d38c50f0ec5c8ef1405d3ca7ff2440592475557ffcc46816a6e (image=quay.io/ceph/ceph:v20, name=wonderful_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec 09 16:03:13 compute-0 podman[75786]: 2025-12-09 16:03:13.71748809 +0000 UTC m=+0.138718156 container start 40849686ed810d38c50f0ec5c8ef1405d3ca7ff2440592475557ffcc46816a6e (image=quay.io/ceph/ceph:v20, name=wonderful_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:03:13 compute-0 podman[75786]: 2025-12-09 16:03:13.727796004 +0000 UTC m=+0.149026060 container attach 40849686ed810d38c50f0ec5c8ef1405d3ca7ff2440592475557ffcc46816a6e (image=quay.io/ceph/ceph:v20, name=wonderful_einstein, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 09 16:03:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198971244' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]: 
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]: {
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "health": {
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "status": "HEALTH_OK",
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "checks": {},
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "mutes": []
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     },
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "election_epoch": 5,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "quorum": [
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         0
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     ],
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "quorum_names": [
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "compute-0"
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     ],
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "quorum_age": 9,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "monmap": {
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "epoch": 1,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "min_mon_release_name": "tentacle",
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "num_mons": 1
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     },
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "osdmap": {
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "epoch": 1,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "num_osds": 0,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "num_up_osds": 0,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "osd_up_since": 0,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "num_in_osds": 0,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "osd_in_since": 0,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "num_remapped_pgs": 0
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     },
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "pgmap": {
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "pgs_by_state": [],
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "num_pgs": 0,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "num_pools": 0,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "num_objects": 0,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "data_bytes": 0,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "bytes_used": 0,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "bytes_avail": 0,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "bytes_total": 0
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     },
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "fsmap": {
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "epoch": 1,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "btime": "2025-12-09T16:03:01.781860+0000",
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "by_rank": [],
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "up:standby": 0
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     },
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "mgrmap": {
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "available": false,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "num_standbys": 0,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "modules": [
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:             "iostat",
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:             "nfs"
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         ],
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "services": {}
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     },
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "servicemap": {
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "epoch": 1,
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "modified": "2025-12-09T16:03:01.783688+0000",
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:         "services": {}
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     },
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]:     "progress_events": {}
Dec 09 16:03:13 compute-0 wonderful_einstein[75802]: }
Dec 09 16:03:13 compute-0 systemd[1]: libpod-40849686ed810d38c50f0ec5c8ef1405d3ca7ff2440592475557ffcc46816a6e.scope: Deactivated successfully.
Dec 09 16:03:13 compute-0 podman[75786]: 2025-12-09 16:03:13.987848968 +0000 UTC m=+0.409079024 container died 40849686ed810d38c50f0ec5c8ef1405d3ca7ff2440592475557ffcc46816a6e (image=quay.io/ceph/ceph:v20, name=wonderful_einstein, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ac4af837f91683c45518e61d104f680959ae7a0f9de31d5c997c0bda7c15a9e-merged.mount: Deactivated successfully.
Dec 09 16:03:14 compute-0 podman[75786]: 2025-12-09 16:03:14.020289223 +0000 UTC m=+0.441519279 container remove 40849686ed810d38c50f0ec5c8ef1405d3ca7ff2440592475557ffcc46816a6e (image=quay.io/ceph/ceph:v20, name=wonderful_einstein, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:14 compute-0 systemd[1]: libpod-conmon-40849686ed810d38c50f0ec5c8ef1405d3ca7ff2440592475557ffcc46816a6e.scope: Deactivated successfully.
Dec 09 16:03:14 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.ysegzv(active, since 1.02427s)
Dec 09 16:03:14 compute-0 ceph-mon[75222]: from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:14 compute-0 ceph-mon[75222]: from='mgr.14102 192.168.122.100:0/839702511' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:14 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/198971244' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 09 16:03:14 compute-0 ceph-mon[75222]: mgrmap e3: compute-0.ysegzv(active, since 1.02427s)
Dec 09 16:03:15 compute-0 ceph-mgr[75515]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 16:03:15 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:15 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.ysegzv(active, since 2s)
Dec 09 16:03:16 compute-0 podman[75842]: 2025-12-09 16:03:16.089937787 +0000 UTC m=+0.045624281 container create c966f2396392da7f02b1bf5a1b68057312b9f5b90c1bde029fd2f2e7b3fc421e (image=quay.io/ceph/ceph:v20, name=mystifying_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:16 compute-0 systemd[1]: Started libpod-conmon-c966f2396392da7f02b1bf5a1b68057312b9f5b90c1bde029fd2f2e7b3fc421e.scope.
Dec 09 16:03:16 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:16 compute-0 podman[75842]: 2025-12-09 16:03:16.070183454 +0000 UTC m=+0.025869968 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdade036b924c7ec65b668dc6c761874fb2bc0c0601b9954a2bb643ac75742a4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdade036b924c7ec65b668dc6c761874fb2bc0c0601b9954a2bb643ac75742a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdade036b924c7ec65b668dc6c761874fb2bc0c0601b9954a2bb643ac75742a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:16 compute-0 podman[75842]: 2025-12-09 16:03:16.181767315 +0000 UTC m=+0.137453819 container init c966f2396392da7f02b1bf5a1b68057312b9f5b90c1bde029fd2f2e7b3fc421e (image=quay.io/ceph/ceph:v20, name=mystifying_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 09 16:03:16 compute-0 podman[75842]: 2025-12-09 16:03:16.188573999 +0000 UTC m=+0.144260483 container start c966f2396392da7f02b1bf5a1b68057312b9f5b90c1bde029fd2f2e7b3fc421e (image=quay.io/ceph/ceph:v20, name=mystifying_tesla, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:16 compute-0 podman[75842]: 2025-12-09 16:03:16.192396588 +0000 UTC m=+0.148083152 container attach c966f2396392da7f02b1bf5a1b68057312b9f5b90c1bde029fd2f2e7b3fc421e (image=quay.io/ceph/ceph:v20, name=mystifying_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 16:03:16 compute-0 ceph-mon[75222]: mgrmap e4: compute-0.ysegzv(active, since 2s)
Dec 09 16:03:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 09 16:03:16 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3404453440' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]: 
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]: {
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "health": {
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "status": "HEALTH_OK",
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "checks": {},
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "mutes": []
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     },
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "election_epoch": 5,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "quorum": [
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         0
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     ],
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "quorum_names": [
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "compute-0"
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     ],
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "quorum_age": 12,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "monmap": {
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "epoch": 1,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "min_mon_release_name": "tentacle",
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "num_mons": 1
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     },
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "osdmap": {
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "epoch": 1,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "num_osds": 0,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "num_up_osds": 0,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "osd_up_since": 0,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "num_in_osds": 0,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "osd_in_since": 0,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "num_remapped_pgs": 0
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     },
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "pgmap": {
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "pgs_by_state": [],
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "num_pgs": 0,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "num_pools": 0,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "num_objects": 0,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "data_bytes": 0,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "bytes_used": 0,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "bytes_avail": 0,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "bytes_total": 0
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     },
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "fsmap": {
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "epoch": 1,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "btime": "2025-12-09T16:03:01.781860+0000",
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "by_rank": [],
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "up:standby": 0
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     },
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "mgrmap": {
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "available": true,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "num_standbys": 0,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "modules": [
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:             "iostat",
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:             "nfs"
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         ],
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "services": {}
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     },
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "servicemap": {
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "epoch": 1,
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "modified": "2025-12-09T16:03:01.783688+0000",
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:         "services": {}
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     },
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]:     "progress_events": {}
Dec 09 16:03:16 compute-0 mystifying_tesla[75858]: }
Dec 09 16:03:16 compute-0 systemd[1]: libpod-c966f2396392da7f02b1bf5a1b68057312b9f5b90c1bde029fd2f2e7b3fc421e.scope: Deactivated successfully.
Dec 09 16:03:16 compute-0 podman[75842]: 2025-12-09 16:03:16.755339658 +0000 UTC m=+0.711026142 container died c966f2396392da7f02b1bf5a1b68057312b9f5b90c1bde029fd2f2e7b3fc421e (image=quay.io/ceph/ceph:v20, name=mystifying_tesla, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdade036b924c7ec65b668dc6c761874fb2bc0c0601b9954a2bb643ac75742a4-merged.mount: Deactivated successfully.
Dec 09 16:03:16 compute-0 podman[75842]: 2025-12-09 16:03:16.791883929 +0000 UTC m=+0.747570413 container remove c966f2396392da7f02b1bf5a1b68057312b9f5b90c1bde029fd2f2e7b3fc421e (image=quay.io/ceph/ceph:v20, name=mystifying_tesla, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:16 compute-0 systemd[1]: libpod-conmon-c966f2396392da7f02b1bf5a1b68057312b9f5b90c1bde029fd2f2e7b3fc421e.scope: Deactivated successfully.
Dec 09 16:03:16 compute-0 podman[75895]: 2025-12-09 16:03:16.866928389 +0000 UTC m=+0.049919034 container create ee37926380c8de186748ca404c9248b7124cfea0c521dc5474858b554dacd36a (image=quay.io/ceph/ceph:v20, name=great_haibt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:16 compute-0 systemd[1]: Started libpod-conmon-ee37926380c8de186748ca404c9248b7124cfea0c521dc5474858b554dacd36a.scope.
Dec 09 16:03:16 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0590ea1d9e981e519a023d9af2a378fa8dfdc95c392079edb7761e2e3fa61c80/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0590ea1d9e981e519a023d9af2a378fa8dfdc95c392079edb7761e2e3fa61c80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0590ea1d9e981e519a023d9af2a378fa8dfdc95c392079edb7761e2e3fa61c80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0590ea1d9e981e519a023d9af2a378fa8dfdc95c392079edb7761e2e3fa61c80/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:16 compute-0 podman[75895]: 2025-12-09 16:03:16.845826467 +0000 UTC m=+0.028817112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:16 compute-0 podman[75895]: 2025-12-09 16:03:16.941652295 +0000 UTC m=+0.124642910 container init ee37926380c8de186748ca404c9248b7124cfea0c521dc5474858b554dacd36a (image=quay.io/ceph/ceph:v20, name=great_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:16 compute-0 podman[75895]: 2025-12-09 16:03:16.952873712 +0000 UTC m=+0.135864377 container start ee37926380c8de186748ca404c9248b7124cfea0c521dc5474858b554dacd36a (image=quay.io/ceph/ceph:v20, name=great_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:16 compute-0 podman[75895]: 2025-12-09 16:03:16.957654488 +0000 UTC m=+0.140645243 container attach ee37926380c8de186748ca404c9248b7124cfea0c521dc5474858b554dacd36a (image=quay.io/ceph/ceph:v20, name=great_haibt, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
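[editor's note] The create → init → start → attach → died → remove sequences around great_haibt are cephadm's short-lived-container pattern: each ceph CLI call runs in a throwaway container that mounts the host's /etc/ceph (the xfs remount lines show the bind-mounted ceph.conf and admin keyring). A minimal sketch of that pattern follows; the --net=host flag and mount options are illustrative assumptions, not cephadm's exact argv.

    import subprocess

    def run_in_ceph_container(*ceph_args: str) -> str:
        # Run one ceph CLI command in a throwaway container and capture
        # stdout, mirroring the transient podman containers in the log.
        cmd = [
            "podman", "run", "--rm", "--net=host",
            "-v", "/etc/ceph:/etc/ceph:z",   # conf + admin keyring, as bind-mounted above
            "quay.io/ceph/ceph:v20",
            "ceph", *ceph_args,
        ]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    if __name__ == "__main__":
        print(run_in_ceph_container("status", "--format", "json-pretty"))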
Dec 09 16:03:17 compute-0 ceph-mgr[75515]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 16:03:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 09 16:03:17 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/696184256' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 09 16:03:17 compute-0 great_haibt[75912]: 
Dec 09 16:03:17 compute-0 great_haibt[75912]: [global]
Dec 09 16:03:17 compute-0 great_haibt[75912]:         fsid = 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:17 compute-0 great_haibt[75912]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 09 16:03:17 compute-0 great_haibt[75912]:         osd_crush_chooseleaf_type = 0
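[editor's note] The [global] block printed by great_haibt is the minimal bootstrap conf being fed to "config assimilate-conf" (the mon_command dispatched just above). A sketch of issuing the same command through python-rados, assuming the librados Python bindings and a readable admin conf/keyring; the conf text travels in the command's input buffer, the CLI equivalent being `ceph config assimilate-conf -i <file>`:

    import json
    import rados

    CONF = b"""\
    [global]
            fsid = 67f67f44-54fc-54ea-8df0-10931b6ecdaf
            mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
            osd_crush_chooseleaf_type = 0
    """

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        # mon_command takes the command as a JSON string plus an input
        # buffer; assimilate-conf reads the conf file contents from inbuf.
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "config assimilate-conf"}), CONF)
        print(ret, out.decode(), errs)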
Dec 09 16:03:17 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:17 compute-0 systemd[1]: libpod-ee37926380c8de186748ca404c9248b7124cfea0c521dc5474858b554dacd36a.scope: Deactivated successfully.
Dec 09 16:03:17 compute-0 podman[75895]: 2025-12-09 16:03:17.365396033 +0000 UTC m=+0.548386688 container died ee37926380c8de186748ca404c9248b7124cfea0c521dc5474858b554dacd36a (image=quay.io/ceph/ceph:v20, name=great_haibt, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0590ea1d9e981e519a023d9af2a378fa8dfdc95c392079edb7761e2e3fa61c80-merged.mount: Deactivated successfully.
Dec 09 16:03:17 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3404453440' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 09 16:03:17 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/696184256' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 09 16:03:17 compute-0 podman[75895]: 2025-12-09 16:03:17.412249913 +0000 UTC m=+0.595240538 container remove ee37926380c8de186748ca404c9248b7124cfea0c521dc5474858b554dacd36a (image=quay.io/ceph/ceph:v20, name=great_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:17 compute-0 systemd[1]: libpod-conmon-ee37926380c8de186748ca404c9248b7124cfea0c521dc5474858b554dacd36a.scope: Deactivated successfully.
Dec 09 16:03:17 compute-0 podman[75949]: 2025-12-09 16:03:17.486333453 +0000 UTC m=+0.049481067 container create bc308cf339e2ec5a4ec3825a7ebbc529c507dfdc439cfcad1b53007d48d0aee4 (image=quay.io/ceph/ceph:v20, name=clever_curran, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:17 compute-0 systemd[1]: Started libpod-conmon-bc308cf339e2ec5a4ec3825a7ebbc529c507dfdc439cfcad1b53007d48d0aee4.scope.
Dec 09 16:03:17 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7daa088c16cf4bacdae4e1ead7c19b184ba8c889ded48d529920ce30b462dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7daa088c16cf4bacdae4e1ead7c19b184ba8c889ded48d529920ce30b462dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7daa088c16cf4bacdae4e1ead7c19b184ba8c889ded48d529920ce30b462dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:17 compute-0 podman[75949]: 2025-12-09 16:03:17.465599686 +0000 UTC m=+0.028747310 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:17 compute-0 podman[75949]: 2025-12-09 16:03:17.561908751 +0000 UTC m=+0.125056385 container init bc308cf339e2ec5a4ec3825a7ebbc529c507dfdc439cfcad1b53007d48d0aee4 (image=quay.io/ceph/ceph:v20, name=clever_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 09 16:03:17 compute-0 podman[75949]: 2025-12-09 16:03:17.568885379 +0000 UTC m=+0.132032993 container start bc308cf339e2ec5a4ec3825a7ebbc529c507dfdc439cfcad1b53007d48d0aee4 (image=quay.io/ceph/ceph:v20, name=clever_curran, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:03:17 compute-0 podman[75949]: 2025-12-09 16:03:17.572223658 +0000 UTC m=+0.135371272 container attach bc308cf339e2ec5a4ec3825a7ebbc529c507dfdc439cfcad1b53007d48d0aee4 (image=quay.io/ceph/ceph:v20, name=clever_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 16:03:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec 09 16:03:18 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2245787453' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 09 16:03:18 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2245787453' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 09 16:03:18 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2245787453' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  1: '-n'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  2: 'mgr.compute-0.ysegzv'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  3: '-f'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  4: '--setuser'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  5: 'ceph'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  6: '--setgroup'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  7: 'ceph'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  8: '--default-log-to-file=false'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  9: '--default-log-to-journald=true'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr respawn  exe_path /proc/self/exe
Dec 09 16:03:18 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.ysegzv(active, since 5s)
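[editor's note] The respawn block above shows ceph-mgr re-executing itself after "mgr module enable cephadm" changed the enabled-module set: it replays its saved argv (entries 0-10) through /proc/self/exe, so the PID and journald unit survive the restart. A minimal sketch of that Linux re-exec technique:

    import os
    import sys

    def respawn() -> None:
        # Replace the current process image in place, preserving argv,
        # the way ceph-mgr's handle_mgr_map does when the module set
        # changes. /proc/self/exe is Linux-specific.
        os.execv("/proc/self/exe", sys.argv)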
Dec 09 16:03:18 compute-0 systemd[1]: libpod-bc308cf339e2ec5a4ec3825a7ebbc529c507dfdc439cfcad1b53007d48d0aee4.scope: Deactivated successfully.
Dec 09 16:03:18 compute-0 podman[75949]: 2025-12-09 16:03:18.457100473 +0000 UTC m=+1.020248097 container died bc308cf339e2ec5a4ec3825a7ebbc529c507dfdc439cfcad1b53007d48d0aee4 (image=quay.io/ceph/ceph:v20, name=clever_curran, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e7daa088c16cf4bacdae4e1ead7c19b184ba8c889ded48d529920ce30b462dc-merged.mount: Deactivated successfully.
Dec 09 16:03:18 compute-0 podman[75949]: 2025-12-09 16:03:18.503149777 +0000 UTC m=+1.066297391 container remove bc308cf339e2ec5a4ec3825a7ebbc529c507dfdc439cfcad1b53007d48d0aee4 (image=quay.io/ceph/ceph:v20, name=clever_curran, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 09 16:03:18 compute-0 systemd[1]: libpod-conmon-bc308cf339e2ec5a4ec3825a7ebbc529c507dfdc439cfcad1b53007d48d0aee4.scope: Deactivated successfully.
Dec 09 16:03:18 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv[75511]: ignoring --setuser ceph since I am not root
Dec 09 16:03:18 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv[75511]: ignoring --setgroup ceph since I am not root
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: pidfile_write: ignore empty --pid-file
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'alerts'
Dec 09 16:03:18 compute-0 podman[76002]: 2025-12-09 16:03:18.579422698 +0000 UTC m=+0.052520076 container create 42d77a27a6e323dcd35ba58396fd57274be39da3eaa634807b0fa72c5f0b040c (image=quay.io/ceph/ceph:v20, name=stoic_perlman, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:18 compute-0 systemd[1]: Started libpod-conmon-42d77a27a6e323dcd35ba58396fd57274be39da3eaa634807b0fa72c5f0b040c.scope.
Dec 09 16:03:18 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8054a4d491265ac60bc55a49b18bbfb52307ada2ae693b11d0ea3a3eeb5c893/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8054a4d491265ac60bc55a49b18bbfb52307ada2ae693b11d0ea3a3eeb5c893/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8054a4d491265ac60bc55a49b18bbfb52307ada2ae693b11d0ea3a3eeb5c893/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:18 compute-0 podman[76002]: 2025-12-09 16:03:18.64285848 +0000 UTC m=+0.115955918 container init 42d77a27a6e323dcd35ba58396fd57274be39da3eaa634807b0fa72c5f0b040c (image=quay.io/ceph/ceph:v20, name=stoic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 09 16:03:18 compute-0 podman[76002]: 2025-12-09 16:03:18.649844838 +0000 UTC m=+0.122942226 container start 42d77a27a6e323dcd35ba58396fd57274be39da3eaa634807b0fa72c5f0b040c (image=quay.io/ceph/ceph:v20, name=stoic_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 09 16:03:18 compute-0 podman[76002]: 2025-12-09 16:03:18.558785874 +0000 UTC m=+0.031883272 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:18 compute-0 podman[76002]: 2025-12-09 16:03:18.655836434 +0000 UTC m=+0.128933822 container attach 42d77a27a6e323dcd35ba58396fd57274be39da3eaa634807b0fa72c5f0b040c (image=quay.io/ceph/ceph:v20, name=stoic_perlman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'balancer'
Dec 09 16:03:18 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'cephadm'
Dec 09 16:03:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 09 16:03:19 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352616275' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 09 16:03:19 compute-0 stoic_perlman[76038]: {
Dec 09 16:03:19 compute-0 stoic_perlman[76038]:     "epoch": 5,
Dec 09 16:03:19 compute-0 stoic_perlman[76038]:     "available": true,
Dec 09 16:03:19 compute-0 stoic_perlman[76038]:     "active_name": "compute-0.ysegzv",
Dec 09 16:03:19 compute-0 stoic_perlman[76038]:     "num_standby": 0
Dec 09 16:03:19 compute-0 stoic_perlman[76038]: }
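[editor's note] The JSON above is the output of the "mgr stat" mon_command that stoic_perlman ran; bootstrap repeats this call until an active mgr reports available. A hedged polling loop using python-rados (conffile path and timeout are assumptions):

    import json
    import time
    import rados

    def wait_for_active_mgr(timeout: float = 60.0) -> dict:
        # Poll `mgr stat` until "available" flips to true, mirroring the
        # repeated mon_command({"prefix": "mgr stat"}) dispatches in the log.
        deadline = time.monotonic() + timeout
        with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
            while time.monotonic() < deadline:
                ret, out, errs = cluster.mon_command(
                    json.dumps({"prefix": "mgr stat", "format": "json"}), b"")
                if ret == 0:
                    stat = json.loads(out)
                    if stat.get("available"):
                        return stat
                time.sleep(1)
        raise TimeoutError("no active ceph-mgr became available")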
Dec 09 16:03:19 compute-0 systemd[1]: libpod-42d77a27a6e323dcd35ba58396fd57274be39da3eaa634807b0fa72c5f0b040c.scope: Deactivated successfully.
Dec 09 16:03:19 compute-0 podman[76002]: 2025-12-09 16:03:19.177255941 +0000 UTC m=+0.650353319 container died 42d77a27a6e323dcd35ba58396fd57274be39da3eaa634807b0fa72c5f0b040c (image=quay.io/ceph/ceph:v20, name=stoic_perlman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 16:03:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8054a4d491265ac60bc55a49b18bbfb52307ada2ae693b11d0ea3a3eeb5c893-merged.mount: Deactivated successfully.
Dec 09 16:03:19 compute-0 podman[76002]: 2025-12-09 16:03:19.22193438 +0000 UTC m=+0.695031758 container remove 42d77a27a6e323dcd35ba58396fd57274be39da3eaa634807b0fa72c5f0b040c (image=quay.io/ceph/ceph:v20, name=stoic_perlman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 09 16:03:19 compute-0 systemd[1]: libpod-conmon-42d77a27a6e323dcd35ba58396fd57274be39da3eaa634807b0fa72c5f0b040c.scope: Deactivated successfully.
Dec 09 16:03:19 compute-0 podman[76087]: 2025-12-09 16:03:19.296398212 +0000 UTC m=+0.050455489 container create c73121a0c9bf7ba05c0ee7c20be9ffe9a00ee2bd57c6d3b23d1ff07c0fb277eb (image=quay.io/ceph/ceph:v20, name=cranky_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 16:03:19 compute-0 systemd[1]: Started libpod-conmon-c73121a0c9bf7ba05c0ee7c20be9ffe9a00ee2bd57c6d3b23d1ff07c0fb277eb.scope.
Dec 09 16:03:19 compute-0 podman[76087]: 2025-12-09 16:03:19.275649355 +0000 UTC m=+0.029706672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:19 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93703ca3101efc89869b95f2b82c8ff71855b278511d66d98169ce2d73ac1b30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93703ca3101efc89869b95f2b82c8ff71855b278511d66d98169ce2d73ac1b30/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93703ca3101efc89869b95f2b82c8ff71855b278511d66d98169ce2d73ac1b30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:19 compute-0 podman[76087]: 2025-12-09 16:03:19.409800126 +0000 UTC m=+0.163857433 container init c73121a0c9bf7ba05c0ee7c20be9ffe9a00ee2bd57c6d3b23d1ff07c0fb277eb (image=quay.io/ceph/ceph:v20, name=cranky_lamarr, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:19 compute-0 podman[76087]: 2025-12-09 16:03:19.418970545 +0000 UTC m=+0.173027822 container start c73121a0c9bf7ba05c0ee7c20be9ffe9a00ee2bd57c6d3b23d1ff07c0fb277eb (image=quay.io/ceph/ceph:v20, name=cranky_lamarr, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 16:03:19 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2245787453' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 09 16:03:19 compute-0 ceph-mon[75222]: mgrmap e5: compute-0.ysegzv(active, since 5s)
Dec 09 16:03:19 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/352616275' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 09 16:03:19 compute-0 podman[76087]: 2025-12-09 16:03:19.433939114 +0000 UTC m=+0.187996431 container attach c73121a0c9bf7ba05c0ee7c20be9ffe9a00ee2bd57c6d3b23d1ff07c0fb277eb (image=quay.io/ceph/ceph:v20, name=cranky_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:03:19 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'crash'
Dec 09 16:03:19 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'dashboard'
Dec 09 16:03:20 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'devicehealth'
Dec 09 16:03:20 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'diskprediction_local'
Dec 09 16:03:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv[75511]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 09 16:03:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv[75511]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 09 16:03:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv[75511]:   from numpy import show_config as show_numpy_config
Dec 09 16:03:20 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'influx'
Dec 09 16:03:20 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'insights'
Dec 09 16:03:20 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'iostat'
Dec 09 16:03:20 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'k8sevents'
Dec 09 16:03:21 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'localpool'
Dec 09 16:03:21 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'mds_autoscaler'
Dec 09 16:03:21 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'mirroring'
Dec 09 16:03:21 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'nfs'
Dec 09 16:03:21 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'orchestrator'
Dec 09 16:03:22 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'osd_perf_query'
Dec 09 16:03:22 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'osd_support'
Dec 09 16:03:22 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'pg_autoscaler'
Dec 09 16:03:22 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'progress'
Dec 09 16:03:22 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'prometheus'
Dec 09 16:03:22 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'rbd_support'
Dec 09 16:03:22 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'rgw'
Dec 09 16:03:23 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'rook'
Dec 09 16:03:23 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'selftest'
Dec 09 16:03:23 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'smb'
Dec 09 16:03:24 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'snap_schedule'
Dec 09 16:03:24 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'stats'
Dec 09 16:03:24 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'status'
Dec 09 16:03:24 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'telegraf'
Dec 09 16:03:24 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'telemetry'
Dec 09 16:03:24 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'test_orchestrator'
Dec 09 16:03:25 compute-0 ceph-mgr[75515]: mgr[py] Loading python module 'volumes'
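[editor's note] Each "mgr[py] Loading python module ..." line is ceph-mgr importing one module package; once the mgr activates, the enabled ones are instantiated ("mgr load Constructed class from module: ..." below). A bare-bones sketch of that plugin interface, assuming the in-tree mgr_module API, which is importable only inside ceph-mgr's embedded interpreter:

    from threading import Event
    from mgr_module import MgrModule  # only available inside ceph-mgr

    class Module(MgrModule):
        """Skeleton of the module classes enumerated in the log above."""

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self._shutdown = Event()

        def serve(self) -> None:
            # Runs in its own thread after the class is constructed.
            self.log.info("Starting")
            while not self._shutdown.wait(60):
                pass  # periodic work would go here

        def shutdown(self) -> None:
            self._shutdown.set()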
Dec 09 16:03:25 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ysegzv restarted
Dec 09 16:03:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 09 16:03:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:03:25 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ysegzv
Dec 09 16:03:25 compute-0 ceph-mgr[75515]: ms_deliver_dispatch: unhandled message 0x558fd0982000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 09 16:03:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Dec 09 16:03:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 09 16:03:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 09 16:03:25 compute-0 ceph-mgr[75515]: mgr handle_mgr_map Activating!
Dec 09 16:03:25 compute-0 ceph-mgr[75515]: mgr handle_mgr_map I am now activating
Dec 09 16:03:25 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 09 16:03:25 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.ysegzv(active, starting, since 0.520272s)
Dec 09 16:03:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 09 16:03:25 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 09 16:03:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ysegzv", "id": "compute-0.ysegzv"} v 0)
Dec 09 16:03:25 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mgr metadata", "who": "compute-0.ysegzv", "id": "compute-0.ysegzv"} : dispatch
Dec 09 16:03:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 09 16:03:25 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mds metadata"} : dispatch
Dec 09 16:03:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).mds e1 all = 1
Dec 09 16:03:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 09 16:03:25 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata"} : dispatch
Dec 09 16:03:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 09 16:03:25 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mon metadata"} : dispatch
Dec 09 16:03:25 compute-0 ceph-mgr[75515]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:25 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: balancer
Dec 09 16:03:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Starting
Dec 09 16:03:25 compute-0 ceph-mgr[75515]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:25 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : Manager daemon compute-0.ysegzv is now available
Dec 09 16:03:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:03:25
Dec 09 16:03:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:03:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:03:25 compute-0 ceph-mgr[75515]: [balancer INFO root] No pools available
Dec 09 16:03:25 compute-0 ceph-mon[75222]: Active manager daemon compute-0.ysegzv restarted
Dec 09 16:03:25 compute-0 ceph-mon[75222]: Activating manager daemon compute-0.ysegzv
Dec 09 16:03:25 compute-0 ceph-mon[75222]: osdmap e2: 0 total, 0 up, 0 in
Dec 09 16:03:25 compute-0 ceph-mon[75222]: mgrmap e6: compute-0.ysegzv(active, starting, since 0.520272s)
Dec 09 16:03:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 09 16:03:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mgr metadata", "who": "compute-0.ysegzv", "id": "compute-0.ysegzv"} : dispatch
Dec 09 16:03:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mds metadata"} : dispatch
Dec 09 16:03:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata"} : dispatch
Dec 09 16:03:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mon metadata"} : dispatch
Dec 09 16:03:25 compute-0 ceph-mon[75222]: Manager daemon compute-0.ysegzv is now available
Dec 09 16:03:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Dec 09 16:03:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Dec 09 16:03:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 09 16:03:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec 09 16:03:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec 09 16:03:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
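[editor's note] The cephadm_root_ca cert/key and config_checks entries above are written to the monitor's config-key store; the audit lines intentionally omit the values. A sketch of reading one entry back with python-rados, equivalent to `ceph config-key get <key>`:

    import json
    import rados

    KEY = "mgr/cephadm/cert_store.cert.cephadm_root_ca_cert"

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        # The matching `config-key set` commands appear in the audit log
        # above with their payloads elided.
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "config-key get", "key": KEY}), b"")
        if ret == 0:
            print(out.decode())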
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: cephadm
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: crash
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: devicehealth
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [devicehealth INFO root] Starting
Dec 09 16:03:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 09 16:03:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: iostat
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: nfs
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: orchestrator
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: pg_autoscaler
Dec 09 16:03:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 09 16:03:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: progress
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [progress INFO root] Loading...
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [progress INFO root] No stored events to load
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [progress INFO root] Loaded [] historic events
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [progress INFO root] Loaded OSDMap, ready.
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] recovery thread starting
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] starting setup
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: rbd_support
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: status
Dec 09 16:03:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ysegzv/mirror_snapshot_schedule"} v 0)
Dec 09 16:03:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ysegzv/mirror_snapshot_schedule"} : dispatch
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: telemetry
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] PerfHandler: starting
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TaskHandler: starting
Dec 09 16:03:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ysegzv/trash_purge_schedule"} v 0)
Dec 09 16:03:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ysegzv/trash_purge_schedule"} : dispatch
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] setup complete
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: mgr load Constructed class from module: volumes
Dec 09 16:03:26 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.ysegzv(active, since 1.5288s)
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14128 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 09 16:03:26 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14128 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 09 16:03:26 compute-0 cranky_lamarr[76104]: {
Dec 09 16:03:26 compute-0 cranky_lamarr[76104]:     "mgrmap_epoch": 7,
Dec 09 16:03:26 compute-0 cranky_lamarr[76104]:     "initialized": true
Dec 09 16:03:26 compute-0 cranky_lamarr[76104]: }
Dec 09 16:03:26 compute-0 systemd[1]: libpod-c73121a0c9bf7ba05c0ee7c20be9ffe9a00ee2bd57c6d3b23d1ff07c0fb277eb.scope: Deactivated successfully.
Dec 09 16:03:26 compute-0 podman[76087]: 2025-12-09 16:03:26.887762378 +0000 UTC m=+7.641819645 container died c73121a0c9bf7ba05c0ee7c20be9ffe9a00ee2bd57c6d3b23d1ff07c0fb277eb (image=quay.io/ceph/ceph:v20, name=cranky_lamarr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 09 16:03:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-93703ca3101efc89869b95f2b82c8ff71855b278511d66d98169ce2d73ac1b30-merged.mount: Deactivated successfully.
Dec 09 16:03:26 compute-0 podman[76087]: 2025-12-09 16:03:26.919869926 +0000 UTC m=+7.673927193 container remove c73121a0c9bf7ba05c0ee7c20be9ffe9a00ee2bd57c6d3b23d1ff07c0fb277eb (image=quay.io/ceph/ceph:v20, name=cranky_lamarr, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 16:03:26 compute-0 systemd[1]: libpod-conmon-c73121a0c9bf7ba05c0ee7c20be9ffe9a00ee2bd57c6d3b23d1ff07c0fb277eb.scope: Deactivated successfully.
Dec 09 16:03:26 compute-0 podman[76251]: 2025-12-09 16:03:26.993908004 +0000 UTC m=+0.049813508 container create ef2ca4ac60dde4f585723e138a9fcfb6125d5ea958223dbd8da039508e9763ab (image=quay.io/ceph/ceph:v20, name=distracted_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:27 compute-0 systemd[1]: Started libpod-conmon-ef2ca4ac60dde4f585723e138a9fcfb6125d5ea958223dbd8da039508e9763ab.scope.
Dec 09 16:03:27 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2bec36edb96e0cdefa4ab66502c44f8554e758f70d9b162b54fa54712224be5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2bec36edb96e0cdefa4ab66502c44f8554e758f70d9b162b54fa54712224be5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2bec36edb96e0cdefa4ab66502c44f8554e758f70d9b162b54fa54712224be5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:27 compute-0 podman[76251]: 2025-12-09 16:03:26.975237534 +0000 UTC m=+0.031143058 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:27 compute-0 podman[76251]: 2025-12-09 16:03:27.072412218 +0000 UTC m=+0.128317722 container init ef2ca4ac60dde4f585723e138a9fcfb6125d5ea958223dbd8da039508e9763ab (image=quay.io/ceph/ceph:v20, name=distracted_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:03:27 compute-0 podman[76251]: 2025-12-09 16:03:27.079671495 +0000 UTC m=+0.135576989 container start ef2ca4ac60dde4f585723e138a9fcfb6125d5ea958223dbd8da039508e9763ab (image=quay.io/ceph/ceph:v20, name=distracted_herschel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 16:03:27 compute-0 podman[76251]: 2025-12-09 16:03:27.083901803 +0000 UTC m=+0.139807327 container attach ef2ca4ac60dde4f585723e138a9fcfb6125d5ea958223dbd8da039508e9763ab (image=quay.io/ceph/ceph:v20, name=distracted_herschel, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:27 compute-0 ceph-mon[75222]: Found migration_current of "None". Setting to last migration.
Dec 09 16:03:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 09 16:03:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 09 16:03:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ysegzv/mirror_snapshot_schedule"} : dispatch
Dec 09 16:03:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ysegzv/trash_purge_schedule"} : dispatch
Dec 09 16:03:27 compute-0 ceph-mon[75222]: mgrmap e7: compute-0.ysegzv(active, since 1.5288s)
Dec 09 16:03:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Dec 09 16:03:27 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4249457514' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 09 16:03:27 compute-0 ceph-mgr[75515]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 16:03:27 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4249457514' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 09 16:03:27 compute-0 distracted_herschel[76268]: module 'orchestrator' is already enabled (always-on)
Dec 09 16:03:27 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.ysegzv(active, since 2s)
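[editor's note] distracted_herschel's "module 'orchestrator' is already enabled (always-on)" comes from the mgr's always-on module list, so the enable command above is a no-op. A sketch of inspecting that list via `mgr module ls`; the JSON field name is an assumption based on recent Ceph releases:

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "mgr module ls", "format": "json"}), b"")
        mods = json.loads(out)
        # orchestrator is expected under the always-on set rather than
        # enabled_modules, which is why enabling it reports a no-op.
        print(mods.get("always_on_modules"))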
Dec 09 16:03:27 compute-0 systemd[1]: libpod-ef2ca4ac60dde4f585723e138a9fcfb6125d5ea958223dbd8da039508e9763ab.scope: Deactivated successfully.
Dec 09 16:03:27 compute-0 podman[76251]: 2025-12-09 16:03:27.887424233 +0000 UTC m=+0.943329737 container died ef2ca4ac60dde4f585723e138a9fcfb6125d5ea958223dbd8da039508e9763ab (image=quay.io/ceph/ceph:v20, name=distracted_herschel, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:03:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2bec36edb96e0cdefa4ab66502c44f8554e758f70d9b162b54fa54712224be5-merged.mount: Deactivated successfully.
Dec 09 16:03:27 compute-0 podman[76251]: 2025-12-09 16:03:27.921673572 +0000 UTC m=+0.977579056 container remove ef2ca4ac60dde4f585723e138a9fcfb6125d5ea958223dbd8da039508e9763ab (image=quay.io/ceph/ceph:v20, name=distracted_herschel, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 16:03:27 compute-0 systemd[1]: libpod-conmon-ef2ca4ac60dde4f585723e138a9fcfb6125d5ea958223dbd8da039508e9763ab.scope: Deactivated successfully.
Dec 09 16:03:28 compute-0 podman[76306]: 2025-12-09 16:03:27.999472282 +0000 UTC m=+0.056175895 container create 01c522bfd04b99c8c202786e8a7bfad8d827ba68af25437df08c5a721f965342 (image=quay.io/ceph/ceph:v20, name=festive_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:03:28 compute-0 systemd[1]: Started libpod-conmon-01c522bfd04b99c8c202786e8a7bfad8d827ba68af25437df08c5a721f965342.scope.
Dec 09 16:03:28 compute-0 podman[76306]: 2025-12-09 16:03:27.969014638 +0000 UTC m=+0.025718241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:28 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bad5f6544b14e226bae296e392cddfc33041abad6abe7f3c06617bf1c3cbc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bad5f6544b14e226bae296e392cddfc33041abad6abe7f3c06617bf1c3cbc1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bad5f6544b14e226bae296e392cddfc33041abad6abe7f3c06617bf1c3cbc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:28 compute-0 podman[76306]: 2025-12-09 16:03:28.091636572 +0000 UTC m=+0.148340145 container init 01c522bfd04b99c8c202786e8a7bfad8d827ba68af25437df08c5a721f965342 (image=quay.io/ceph/ceph:v20, name=festive_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:03:28 compute-0 podman[76306]: 2025-12-09 16:03:28.097514764 +0000 UTC m=+0.154218347 container start 01c522bfd04b99c8c202786e8a7bfad8d827ba68af25437df08c5a721f965342 (image=quay.io/ceph/ceph:v20, name=festive_chaum, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:28 compute-0 podman[76306]: 2025-12-09 16:03:28.101529365 +0000 UTC m=+0.158232958 container attach 01c522bfd04b99c8c202786e8a7bfad8d827ba68af25437df08c5a721f965342 (image=quay.io/ceph/ceph:v20, name=festive_chaum, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:28 compute-0 ceph-mon[75222]: from='client.14128 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 09 16:03:28 compute-0 ceph-mon[75222]: from='client.14128 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 09 16:03:28 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4249457514' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 09 16:03:28 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4249457514' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 09 16:03:28 compute-0 ceph-mon[75222]: mgrmap e8: compute-0.ysegzv(active, since 2s)
Dec 09 16:03:28 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:28 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec 09 16:03:28 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 09 16:03:28 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config dump", "format": "json"} : dispatch
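
The handle_command line above shows the mgr persisting the backend selection as the config key mgr/orchestrator/orchestrator, triggered by the "orch set backend" call audited just before it. The equivalent CLI, sketched under the same assumption of admin keyring access:

    # Point the orchestrator interface at the cephadm backend
    ceph orch set backend cephadm
    # Confirm the active backend
    ceph orch status
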
Dec 09 16:03:28 compute-0 systemd[1]: libpod-01c522bfd04b99c8c202786e8a7bfad8d827ba68af25437df08c5a721f965342.scope: Deactivated successfully.
Dec 09 16:03:28 compute-0 podman[76306]: 2025-12-09 16:03:28.57568801 +0000 UTC m=+0.632391583 container died 01c522bfd04b99c8c202786e8a7bfad8d827ba68af25437df08c5a721f965342 (image=quay.io/ceph/ceph:v20, name=festive_chaum, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-64bad5f6544b14e226bae296e392cddfc33041abad6abe7f3c06617bf1c3cbc1-merged.mount: Deactivated successfully.
Dec 09 16:03:28 compute-0 podman[76306]: 2025-12-09 16:03:28.609041829 +0000 UTC m=+0.665745402 container remove 01c522bfd04b99c8c202786e8a7bfad8d827ba68af25437df08c5a721f965342 (image=quay.io/ceph/ceph:v20, name=festive_chaum, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 16:03:28 compute-0 systemd[1]: libpod-conmon-01c522bfd04b99c8c202786e8a7bfad8d827ba68af25437df08c5a721f965342.scope: Deactivated successfully.
Dec 09 16:03:28 compute-0 ceph-mgr[75515]: [cephadm INFO cherrypy.error] [09/Dec/2025:16:03:28] ENGINE Bus STARTING
Dec 09 16:03:28 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : [09/Dec/2025:16:03:28] ENGINE Bus STARTING
Dec 09 16:03:28 compute-0 podman[76360]: 2025-12-09 16:03:28.672294604 +0000 UTC m=+0.042677744 container create aec054c0b046650b7c7a5f758c4b61640440cb916163f2283ae829b28bc0dabb (image=quay.io/ceph/ceph:v20, name=objective_franklin, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:03:28 compute-0 systemd[1]: Started libpod-conmon-aec054c0b046650b7c7a5f758c4b61640440cb916163f2283ae829b28bc0dabb.scope.
Dec 09 16:03:28 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:28 compute-0 podman[76360]: 2025-12-09 16:03:28.654564045 +0000 UTC m=+0.024947185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbfd90588872bed59cef70b2a3a7ee66677874228e4305fbca1810ea14f97407/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbfd90588872bed59cef70b2a3a7ee66677874228e4305fbca1810ea14f97407/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbfd90588872bed59cef70b2a3a7ee66677874228e4305fbca1810ea14f97407/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:28 compute-0 ceph-mgr[75515]: [cephadm INFO cherrypy.error] [09/Dec/2025:16:03:28] ENGINE Serving on http://192.168.122.100:8765
Dec 09 16:03:28 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : [09/Dec/2025:16:03:28] ENGINE Serving on http://192.168.122.100:8765
Dec 09 16:03:28 compute-0 podman[76360]: 2025-12-09 16:03:28.762264842 +0000 UTC m=+0.132648012 container init aec054c0b046650b7c7a5f758c4b61640440cb916163f2283ae829b28bc0dabb (image=quay.io/ceph/ceph:v20, name=objective_franklin, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:28 compute-0 podman[76360]: 2025-12-09 16:03:28.768417213 +0000 UTC m=+0.138800353 container start aec054c0b046650b7c7a5f758c4b61640440cb916163f2283ae829b28bc0dabb (image=quay.io/ceph/ceph:v20, name=objective_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:28 compute-0 podman[76360]: 2025-12-09 16:03:28.771497934 +0000 UTC m=+0.141881074 container attach aec054c0b046650b7c7a5f758c4b61640440cb916163f2283ae829b28bc0dabb (image=quay.io/ceph/ceph:v20, name=objective_franklin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 09 16:03:28 compute-0 ceph-mgr[75515]: [cephadm INFO cherrypy.error] [09/Dec/2025:16:03:28] ENGINE Serving on https://192.168.122.100:7150
Dec 09 16:03:28 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : [09/Dec/2025:16:03:28] ENGINE Serving on https://192.168.122.100:7150
Dec 09 16:03:28 compute-0 ceph-mgr[75515]: [cephadm INFO cherrypy.error] [09/Dec/2025:16:03:28] ENGINE Bus STARTED
Dec 09 16:03:28 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : [09/Dec/2025:16:03:28] ENGINE Bus STARTED
Dec 09 16:03:28 compute-0 ceph-mgr[75515]: [cephadm INFO cherrypy.error] [09/Dec/2025:16:03:28] ENGINE Client ('192.168.122.100', 35374) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 09 16:03:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 09 16:03:28 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 09 16:03:28 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : [09/Dec/2025:16:03:28] ENGINE Client ('192.168.122.100', 35374) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec 09 16:03:29 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: [cephadm INFO root] Set ssh ssh_user
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 09 16:03:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec 09 16:03:29 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: [cephadm INFO root] Set ssh ssh_config
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 09 16:03:29 compute-0 objective_franklin[76386]: ssh user set to ceph-admin. sudo will be used
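
The entries above record cephadm being told to operate over SSH as the unprivileged ceph-admin user (hence the sudo entries later in this journal) and being handed an ssh_config, both stored as mgr/cephadm/* config keys. A sketch of the same calls; the ssh_config path is illustrative, not taken from this log:

    # Use a non-root SSH user; cephadm wraps remote commands in sudo
    ceph cephadm set-user ceph-admin
    # Optionally supply a custom SSH client configuration
    ceph cephadm set-ssh-config -i /path/to/ssh_config
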
Dec 09 16:03:29 compute-0 systemd[1]: libpod-aec054c0b046650b7c7a5f758c4b61640440cb916163f2283ae829b28bc0dabb.scope: Deactivated successfully.
Dec 09 16:03:29 compute-0 podman[76360]: 2025-12-09 16:03:29.197426062 +0000 UTC m=+0.567809222 container died aec054c0b046650b7c7a5f758c4b61640440cb916163f2283ae829b28bc0dabb (image=quay.io/ceph/ceph:v20, name=objective_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 09 16:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbfd90588872bed59cef70b2a3a7ee66677874228e4305fbca1810ea14f97407-merged.mount: Deactivated successfully.
Dec 09 16:03:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019897735 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:03:29 compute-0 podman[76360]: 2025-12-09 16:03:29.242617788 +0000 UTC m=+0.613000938 container remove aec054c0b046650b7c7a5f758c4b61640440cb916163f2283ae829b28bc0dabb (image=quay.io/ceph/ceph:v20, name=objective_franklin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:29 compute-0 systemd[1]: libpod-conmon-aec054c0b046650b7c7a5f758c4b61640440cb916163f2283ae829b28bc0dabb.scope: Deactivated successfully.
Dec 09 16:03:29 compute-0 podman[76436]: 2025-12-09 16:03:29.30084523 +0000 UTC m=+0.039606435 container create bde34cab8c298596f4f5c79226df2d851f1e06a3897bf8a6c09b6b390bfbdc17 (image=quay.io/ceph/ceph:v20, name=suspicious_napier, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:29 compute-0 systemd[1]: Started libpod-conmon-bde34cab8c298596f4f5c79226df2d851f1e06a3897bf8a6c09b6b390bfbdc17.scope.
Dec 09 16:03:29 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e77eae58bc7a6c88391fa9686e8026113c30fd93cf102643b07084b868ed716/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e77eae58bc7a6c88391fa9686e8026113c30fd93cf102643b07084b868ed716/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e77eae58bc7a6c88391fa9686e8026113c30fd93cf102643b07084b868ed716/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e77eae58bc7a6c88391fa9686e8026113c30fd93cf102643b07084b868ed716/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e77eae58bc7a6c88391fa9686e8026113c30fd93cf102643b07084b868ed716/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:29 compute-0 podman[76436]: 2025-12-09 16:03:29.365210772 +0000 UTC m=+0.103971987 container init bde34cab8c298596f4f5c79226df2d851f1e06a3897bf8a6c09b6b390bfbdc17 (image=quay.io/ceph/ceph:v20, name=suspicious_napier, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 09 16:03:29 compute-0 podman[76436]: 2025-12-09 16:03:29.371934101 +0000 UTC m=+0.110695296 container start bde34cab8c298596f4f5c79226df2d851f1e06a3897bf8a6c09b6b390bfbdc17 (image=quay.io/ceph/ceph:v20, name=suspicious_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 09 16:03:29 compute-0 podman[76436]: 2025-12-09 16:03:29.375505028 +0000 UTC m=+0.114266263 container attach bde34cab8c298596f4f5c79226df2d851f1e06a3897bf8a6c09b6b390bfbdc17 (image=quay.io/ceph/ceph:v20, name=suspicious_napier, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:29 compute-0 podman[76436]: 2025-12-09 16:03:29.279689179 +0000 UTC m=+0.018450404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:29 compute-0 ceph-mon[75222]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:29 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:29 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 09 16:03:29 compute-0 ceph-mon[75222]: [09/Dec/2025:16:03:28] ENGINE Bus STARTING
Dec 09 16:03:29 compute-0 ceph-mon[75222]: [09/Dec/2025:16:03:28] ENGINE Serving on http://192.168.122.100:8765
Dec 09 16:03:29 compute-0 ceph-mon[75222]: [09/Dec/2025:16:03:28] ENGINE Serving on https://192.168.122.100:7150
Dec 09 16:03:29 compute-0 ceph-mon[75222]: [09/Dec/2025:16:03:28] ENGINE Bus STARTED
Dec 09 16:03:29 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 09 16:03:29 compute-0 ceph-mon[75222]: [09/Dec/2025:16:03:28] ENGINE Client ('192.168.122.100', 35374) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 09 16:03:29 compute-0 ceph-mon[75222]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:29 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:29 compute-0 ceph-mon[75222]: Set ssh ssh_user
Dec 09 16:03:29 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:29 compute-0 ceph-mon[75222]: Set ssh ssh_config
Dec 09 16:03:29 compute-0 ceph-mon[75222]: ssh user set to ceph-admin. sudo will be used
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec 09 16:03:29 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: [cephadm INFO root] Set ssh private key
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 09 16:03:29 compute-0 systemd[1]: libpod-bde34cab8c298596f4f5c79226df2d851f1e06a3897bf8a6c09b6b390bfbdc17.scope: Deactivated successfully.
Dec 09 16:03:29 compute-0 podman[76479]: 2025-12-09 16:03:29.832349647 +0000 UTC m=+0.024798821 container died bde34cab8c298596f4f5c79226df2d851f1e06a3897bf8a6c09b6b390bfbdc17 (image=quay.io/ceph/ceph:v20, name=suspicious_napier, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 09 16:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e77eae58bc7a6c88391fa9686e8026113c30fd93cf102643b07084b868ed716-merged.mount: Deactivated successfully.
Dec 09 16:03:29 compute-0 ceph-mgr[75515]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 16:03:29 compute-0 podman[76479]: 2025-12-09 16:03:29.864376143 +0000 UTC m=+0.056825307 container remove bde34cab8c298596f4f5c79226df2d851f1e06a3897bf8a6c09b6b390bfbdc17 (image=quay.io/ceph/ceph:v20, name=suspicious_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:03:29 compute-0 systemd[1]: libpod-conmon-bde34cab8c298596f4f5c79226df2d851f1e06a3897bf8a6c09b6b390bfbdc17.scope: Deactivated successfully.
Dec 09 16:03:29 compute-0 podman[76494]: 2025-12-09 16:03:29.936834659 +0000 UTC m=+0.045197417 container create 6be5bbc8fe7c148ea18a333e4ca0928ee36f32fdc5a76518f970ed44c247c9ed (image=quay.io/ceph/ceph:v20, name=compassionate_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Dec 09 16:03:29 compute-0 systemd[1]: Started libpod-conmon-6be5bbc8fe7c148ea18a333e4ca0928ee36f32fdc5a76518f970ed44c247c9ed.scope.
Dec 09 16:03:29 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f5f85ae778e7784659ed825a100a2155910204f8c220c8330318479190d069/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f5f85ae778e7784659ed825a100a2155910204f8c220c8330318479190d069/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f5f85ae778e7784659ed825a100a2155910204f8c220c8330318479190d069/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f5f85ae778e7784659ed825a100a2155910204f8c220c8330318479190d069/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f5f85ae778e7784659ed825a100a2155910204f8c220c8330318479190d069/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:30 compute-0 podman[76494]: 2025-12-09 16:03:30.006313358 +0000 UTC m=+0.114676146 container init 6be5bbc8fe7c148ea18a333e4ca0928ee36f32fdc5a76518f970ed44c247c9ed (image=quay.io/ceph/ceph:v20, name=compassionate_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:30 compute-0 podman[76494]: 2025-12-09 16:03:29.917455376 +0000 UTC m=+0.025818164 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:30 compute-0 podman[76494]: 2025-12-09 16:03:30.015221049 +0000 UTC m=+0.123583817 container start 6be5bbc8fe7c148ea18a333e4ca0928ee36f32fdc5a76518f970ed44c247c9ed (image=quay.io/ceph/ceph:v20, name=compassionate_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 09 16:03:30 compute-0 podman[76494]: 2025-12-09 16:03:30.018549387 +0000 UTC m=+0.126912185 container attach 6be5bbc8fe7c148ea18a333e4ca0928ee36f32fdc5a76518f970ed44c247c9ed (image=quay.io/ceph/ceph:v20, name=compassionate_chaum, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:30 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec 09 16:03:30 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:30 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:30 compute-0 ceph-mgr[75515]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 09 16:03:30 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
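
The set-priv-key and set-pub-key calls above store the orchestrator's SSH identity in the mon config-key store (mgr/cephadm/ssh_identity_key and mgr/cephadm/ssh_identity_pub); the key material is what the helper containers bind-mount at /tmp/cephadm-ssh-key and /tmp/cephadm-ssh-key.pub in the xfs remount lines above. Sketched with those paths, which may differ outside this run:

    # Hand the generated keypair to cephadm (paths as mounted in this log)
    ceph cephadm set-priv-key -i /tmp/cephadm-ssh-key
    ceph cephadm set-pub-key -i /tmp/cephadm-ssh-key.pub
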
Dec 09 16:03:30 compute-0 systemd[1]: libpod-6be5bbc8fe7c148ea18a333e4ca0928ee36f32fdc5a76518f970ed44c247c9ed.scope: Deactivated successfully.
Dec 09 16:03:30 compute-0 podman[76494]: 2025-12-09 16:03:30.665999521 +0000 UTC m=+0.774362339 container died 6be5bbc8fe7c148ea18a333e4ca0928ee36f32fdc5a76518f970ed44c247c9ed (image=quay.io/ceph/ceph:v20, name=compassionate_chaum, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Dec 09 16:03:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5f5f85ae778e7784659ed825a100a2155910204f8c220c8330318479190d069-merged.mount: Deactivated successfully.
Dec 09 16:03:30 compute-0 podman[76494]: 2025-12-09 16:03:30.717217763 +0000 UTC m=+0.825580531 container remove 6be5bbc8fe7c148ea18a333e4ca0928ee36f32fdc5a76518f970ed44c247c9ed (image=quay.io/ceph/ceph:v20, name=compassionate_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:30 compute-0 systemd[1]: libpod-conmon-6be5bbc8fe7c148ea18a333e4ca0928ee36f32fdc5a76518f970ed44c247c9ed.scope: Deactivated successfully.
Dec 09 16:03:30 compute-0 ceph-mon[75222]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:30 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:30 compute-0 ceph-mon[75222]: Set ssh ssh_identity_key
Dec 09 16:03:30 compute-0 ceph-mon[75222]: Set ssh private key
Dec 09 16:03:30 compute-0 ceph-mon[75222]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:30 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:30 compute-0 podman[76550]: 2025-12-09 16:03:30.814731358 +0000 UTC m=+0.062269635 container create 6d3cb5fff4ed060df5298798e7bd1886f2155b1fc701e0e38e3e1b54fcf0b648 (image=quay.io/ceph/ceph:v20, name=zealous_mirzakhani, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:30 compute-0 systemd[1]: Started libpod-conmon-6d3cb5fff4ed060df5298798e7bd1886f2155b1fc701e0e38e3e1b54fcf0b648.scope.
Dec 09 16:03:30 compute-0 podman[76550]: 2025-12-09 16:03:30.791003383 +0000 UTC m=+0.038541750 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19048557c15da1fc4ed66e64c3151aed0f14f1541265adb11962578d8805765f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19048557c15da1fc4ed66e64c3151aed0f14f1541265adb11962578d8805765f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19048557c15da1fc4ed66e64c3151aed0f14f1541265adb11962578d8805765f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:30 compute-0 podman[76550]: 2025-12-09 16:03:30.920192092 +0000 UTC m=+0.167730479 container init 6d3cb5fff4ed060df5298798e7bd1886f2155b1fc701e0e38e3e1b54fcf0b648 (image=quay.io/ceph/ceph:v20, name=zealous_mirzakhani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:30 compute-0 podman[76550]: 2025-12-09 16:03:30.930454727 +0000 UTC m=+0.177993004 container start 6d3cb5fff4ed060df5298798e7bd1886f2155b1fc701e0e38e3e1b54fcf0b648 (image=quay.io/ceph/ceph:v20, name=zealous_mirzakhani, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:30 compute-0 podman[76550]: 2025-12-09 16:03:30.934066335 +0000 UTC m=+0.181604612 container attach 6d3cb5fff4ed060df5298798e7bd1886f2155b1fc701e0e38e3e1b54fcf0b648 (image=quay.io/ceph/ceph:v20, name=zealous_mirzakhani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:31 compute-0 sshd-session[76543]: Invalid user admin from 146.190.31.45 port 38312
Dec 09 16:03:31 compute-0 sshd-session[76543]: Connection closed by invalid user admin 146.190.31.45 port 38312 [preauth]
Dec 09 16:03:31 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:31 compute-0 zealous_mirzakhani[76567]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYDV3v4IP87FfTcRwI5w/5/O3Voc2CEcxNEF/67VWn64FwDpWreKOw2ZHFVk8QI+l1vLmLbaDx5wLQwTJcp+c/DHURisCXf8zdVMU2gHSwHJBNAxatvZaQeGMR72TjWH87IO9h1leByqFFawH9zftcJFg2SaRcYD+pNZ0FEodAUC9buN7gFyrimvcYuDlhJ2O4L/ZS9Mr8Zg8ITmIgeyEeSiapeV213jVknd1cs/kT/XuwwWUOp4MYZStHOXLk2v4eNW4yQ9kNFQ/IrLN6MWA+18Tg31DdUbMjemTFQrOgJnHlDo/Y1GkMNdAPXQq3ySWmyRxfncrSt2gJjOJ3aq7sVPj6LBNhlVkPE+OKT8KG+WWCAdxMXlLQc1gXh0foAxuVPiR3V9L2B1QslGwYbIT4Mp0TzSB/3/XH7rz6atD9PxwH+gHIauBeoNLDQ8uwh4Osd1D9kfXQXkjInSSlewAPSgfa+tSIez44EfkgLBo1tWOQGaoeQygeokDgBByDsy8= zuul@controller
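
The get-pub-key call returns the stored public key, printed by the zealous_mirzakhani helper container above (an ssh-rsa key tagged zuul@controller). For the orchestrator to reach a host as ceph-admin, that key must be present in the user's authorized_keys there; a sketch using standard OpenSSH tooling:

    # Export the orchestrator's public key and install it for ceph-admin
    ceph cephadm get-pub-key > ceph.pub
    ssh-copy-id -f -i ceph.pub ceph-admin@compute-0
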
Dec 09 16:03:31 compute-0 systemd[1]: libpod-6d3cb5fff4ed060df5298798e7bd1886f2155b1fc701e0e38e3e1b54fcf0b648.scope: Deactivated successfully.
Dec 09 16:03:31 compute-0 podman[76550]: 2025-12-09 16:03:31.443614375 +0000 UTC m=+0.691152702 container died 6d3cb5fff4ed060df5298798e7bd1886f2155b1fc701e0e38e3e1b54fcf0b648 (image=quay.io/ceph/ceph:v20, name=zealous_mirzakhani, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 09 16:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-19048557c15da1fc4ed66e64c3151aed0f14f1541265adb11962578d8805765f-merged.mount: Deactivated successfully.
Dec 09 16:03:31 compute-0 podman[76550]: 2025-12-09 16:03:31.482997901 +0000 UTC m=+0.730536168 container remove 6d3cb5fff4ed060df5298798e7bd1886f2155b1fc701e0e38e3e1b54fcf0b648 (image=quay.io/ceph/ceph:v20, name=zealous_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:31 compute-0 systemd[1]: libpod-conmon-6d3cb5fff4ed060df5298798e7bd1886f2155b1fc701e0e38e3e1b54fcf0b648.scope: Deactivated successfully.
Dec 09 16:03:31 compute-0 podman[76605]: 2025-12-09 16:03:31.553495493 +0000 UTC m=+0.045004700 container create 77a8c5fe812a31015c258d5004b0538a81fccb6bea53f5aead9d7943de5ded2d (image=quay.io/ceph/ceph:v20, name=bold_ishizaka, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 09 16:03:31 compute-0 systemd[1]: Started libpod-conmon-77a8c5fe812a31015c258d5004b0538a81fccb6bea53f5aead9d7943de5ded2d.scope.
Dec 09 16:03:31 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d630f328d502f80f3ebccf9756ebd0dc9f21da99df9f310c4233ac0e7fa4aa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d630f328d502f80f3ebccf9756ebd0dc9f21da99df9f310c4233ac0e7fa4aa6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d630f328d502f80f3ebccf9756ebd0dc9f21da99df9f310c4233ac0e7fa4aa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:31 compute-0 podman[76605]: 2025-12-09 16:03:31.618345931 +0000 UTC m=+0.109855158 container init 77a8c5fe812a31015c258d5004b0538a81fccb6bea53f5aead9d7943de5ded2d (image=quay.io/ceph/ceph:v20, name=bold_ishizaka, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 16:03:31 compute-0 podman[76605]: 2025-12-09 16:03:31.625191695 +0000 UTC m=+0.116700902 container start 77a8c5fe812a31015c258d5004b0538a81fccb6bea53f5aead9d7943de5ded2d (image=quay.io/ceph/ceph:v20, name=bold_ishizaka, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 09 16:03:31 compute-0 podman[76605]: 2025-12-09 16:03:31.628659018 +0000 UTC m=+0.120168245 container attach 77a8c5fe812a31015c258d5004b0538a81fccb6bea53f5aead9d7943de5ded2d (image=quay.io/ceph/ceph:v20, name=bold_ishizaka, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 09 16:03:31 compute-0 podman[76605]: 2025-12-09 16:03:31.535819286 +0000 UTC m=+0.027328513 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:31 compute-0 ceph-mon[75222]: Set ssh ssh_identity_pub
Dec 09 16:03:31 compute-0 ceph-mon[75222]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:31 compute-0 ceph-mgr[75515]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 16:03:32 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:32 compute-0 sshd-session[76647]: Accepted publickey for ceph-admin from 192.168.122.100 port 41498 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:03:32 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 09 16:03:32 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 09 16:03:32 compute-0 systemd-logind[786]: New session 21 of user ceph-admin.
Dec 09 16:03:32 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 09 16:03:32 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 09 16:03:32 compute-0 systemd[76651]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:32 compute-0 systemd[76651]: Queued start job for default target Main User Target.
Dec 09 16:03:32 compute-0 sshd-session[76665]: Accepted publickey for ceph-admin from 192.168.122.100 port 41500 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:03:32 compute-0 systemd[76651]: Created slice User Application Slice.
Dec 09 16:03:32 compute-0 systemd[76651]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 09 16:03:32 compute-0 systemd[76651]: Started Daily Cleanup of User's Temporary Directories.
Dec 09 16:03:32 compute-0 systemd[76651]: Reached target Paths.
Dec 09 16:03:32 compute-0 systemd[76651]: Reached target Timers.
Dec 09 16:03:32 compute-0 systemd-logind[786]: New session 23 of user ceph-admin.
Dec 09 16:03:32 compute-0 systemd[76651]: Starting D-Bus User Message Bus Socket...
Dec 09 16:03:32 compute-0 systemd[76651]: Starting Create User's Volatile Files and Directories...
Dec 09 16:03:32 compute-0 systemd[76651]: Listening on D-Bus User Message Bus Socket.
Dec 09 16:03:32 compute-0 systemd[76651]: Reached target Sockets.
Dec 09 16:03:32 compute-0 systemd[76651]: Finished Create User's Volatile Files and Directories.
Dec 09 16:03:32 compute-0 systemd[76651]: Reached target Basic System.
Dec 09 16:03:32 compute-0 systemd[76651]: Reached target Main User Target.
Dec 09 16:03:32 compute-0 systemd[76651]: Startup finished in 132ms.
Dec 09 16:03:32 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 09 16:03:32 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:32 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Dec 09 16:03:32 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Dec 09 16:03:32 compute-0 sshd-session[76647]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:32 compute-0 sshd-session[76665]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:32 compute-0 sudo[76672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:32 compute-0 sudo[76672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:32 compute-0 sudo[76672]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:32 compute-0 ceph-mon[75222]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
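
The mgr (16:03:32) and mon audit entries record the host registration that drives everything that follows: the SSH logins and sudo commands below are the cephadm mgr module verifying and preparing compute-0. The command as issued, reconstructed from the audited JSON:

    # Register this node with the orchestrator
    ceph orch host add compute-0 192.168.122.100
    # List hosts the orchestrator now manages
    ceph orch host ls
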
Dec 09 16:03:32 compute-0 sshd-session[76697]: Accepted publickey for ceph-admin from 192.168.122.100 port 41512 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:03:32 compute-0 systemd-logind[786]: New session 24 of user ceph-admin.
Dec 09 16:03:32 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Dec 09 16:03:32 compute-0 sshd-session[76697]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:32 compute-0 sudo[76701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 09 16:03:32 compute-0 sudo[76701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:32 compute-0 sudo[76701]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:33 compute-0 sshd-session[76726]: Accepted publickey for ceph-admin from 192.168.122.100 port 41526 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:03:33 compute-0 systemd-logind[786]: New session 25 of user ceph-admin.
Dec 09 16:03:33 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec 09 16:03:33 compute-0 sshd-session[76726]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:33 compute-0 sudo[76730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Dec 09 16:03:33 compute-0 sudo[76730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:33 compute-0 sudo[76730]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:33 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 09 16:03:33 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 09 16:03:33 compute-0 sshd-session[76755]: Accepted publickey for ceph-admin from 192.168.122.100 port 41530 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:03:33 compute-0 systemd-logind[786]: New session 26 of user ceph-admin.
Dec 09 16:03:33 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec 09 16:03:33 compute-0 sshd-session[76755]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:33 compute-0 sudo[76759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:33 compute-0 sudo[76759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:33 compute-0 sudo[76759]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:33 compute-0 ceph-mon[75222]: Deploying cephadm binary to compute-0
Dec 09 16:03:33 compute-0 ceph-mgr[75515]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 16:03:33 compute-0 sshd-session[76784]: Accepted publickey for ceph-admin from 192.168.122.100 port 41542 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:03:33 compute-0 systemd-logind[786]: New session 27 of user ceph-admin.
Dec 09 16:03:33 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec 09 16:03:33 compute-0 sshd-session[76784]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:34 compute-0 sudo[76788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:34 compute-0 sudo[76788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:34 compute-0 sudo[76788]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052519 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:03:34 compute-0 sshd-session[76813]: Accepted publickey for ceph-admin from 192.168.122.100 port 41556 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:03:34 compute-0 systemd-logind[786]: New session 28 of user ceph-admin.
Dec 09 16:03:34 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec 09 16:03:34 compute-0 sshd-session[76813]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:34 compute-0 sudo[76817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Dec 09 16:03:34 compute-0 sudo[76817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:34 compute-0 sudo[76817]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:34 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:34 compute-0 sshd-session[76842]: Accepted publickey for ceph-admin from 192.168.122.100 port 41572 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:03:34 compute-0 systemd-logind[786]: New session 29 of user ceph-admin.
Dec 09 16:03:34 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec 09 16:03:34 compute-0 sshd-session[76842]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:34 compute-0 sudo[76846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:34 compute-0 sudo[76846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:34 compute-0 sudo[76846]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:35 compute-0 sshd-session[76871]: Accepted publickey for ceph-admin from 192.168.122.100 port 41588 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:03:35 compute-0 systemd-logind[786]: New session 30 of user ceph-admin.
Dec 09 16:03:35 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 09 16:03:35 compute-0 sshd-session[76871]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:35 compute-0 sudo[76875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Dec 09 16:03:35 compute-0 sudo[76875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:35 compute-0 sudo[76875]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:35 compute-0 sshd-session[76900]: Accepted publickey for ceph-admin from 192.168.122.100 port 42402 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:03:35 compute-0 systemd-logind[786]: New session 31 of user ceph-admin.
Dec 09 16:03:35 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 09 16:03:35 compute-0 sshd-session[76900]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:35 compute-0 ceph-mgr[75515]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 16:03:36 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:36 compute-0 sshd-session[76927]: Accepted publickey for ceph-admin from 192.168.122.100 port 42404 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:03:36 compute-0 systemd-logind[786]: New session 32 of user ceph-admin.
Dec 09 16:03:37 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 09 16:03:37 compute-0 sshd-session[76927]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:37 compute-0 sudo[76931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Dec 09 16:03:37 compute-0 sudo[76931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:37 compute-0 sudo[76931]: pam_unix(sudo:session): session closed for user root
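[editorial note] The mkdir, touch .new, chown, chmod 644, mv -Z sequence above is a stage-then-rename deployment: the cephadm binary is fully written and given its final mode before it ever appears at its destination path. A minimal Python sketch of the same pattern, with the staging file placed next to the destination so the final rename is atomic (the logged flow stages under /tmp and uses mv -Z to restore the SELinux context; both details are elided here):

    import os

    def deploy_atomically(payload: bytes, dest: str) -> None:
        # Write to <dest>.new, set the final mode, then rename into
        # place; rename(2) within one filesystem is atomic, so readers
        # see either the old file or the complete new one.
        tmp = dest + ".new"
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(tmp, "wb") as f:
            f.write(payload)
        os.chmod(tmp, 0o644)   # matches the logged chmod 644 on the .new file
        os.rename(tmp, dest)   # matches the logged mv of .new into place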
Dec 09 16:03:37 compute-0 sshd-session[76956]: Accepted publickey for ceph-admin from 192.168.122.100 port 42416 ssh2: RSA SHA256:DlqNSpo6KpBjEu6NiZ2P0IfyXTkos7Tmh3ZqDK3rMJs
Dec 09 16:03:37 compute-0 systemd-logind[786]: New session 33 of user ceph-admin.
Dec 09 16:03:37 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec 09 16:03:37 compute-0 sshd-session[76956]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 16:03:37 compute-0 sudo[76960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 09 16:03:37 compute-0 sudo[76960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:37 compute-0 sudo[76960]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 16:03:37 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:37 compute-0 ceph-mgr[75515]: [cephadm INFO root] Added host compute-0
Dec 09 16:03:37 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 09 16:03:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 09 16:03:37 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 09 16:03:37 compute-0 bold_ishizaka[76621]: Added host 'compute-0' with addr '192.168.122.100'
Dec 09 16:03:37 compute-0 ceph-mgr[75515]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 16:03:37 compute-0 systemd[1]: libpod-77a8c5fe812a31015c258d5004b0538a81fccb6bea53f5aead9d7943de5ded2d.scope: Deactivated successfully.
Dec 09 16:03:37 compute-0 sudo[77007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:37 compute-0 sudo[77007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:37 compute-0 podman[77019]: 2025-12-09 16:03:37.901552457 +0000 UTC m=+0.029985690 container died 77a8c5fe812a31015c258d5004b0538a81fccb6bea53f5aead9d7943de5ded2d (image=quay.io/ceph/ceph:v20, name=bold_ishizaka, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 09 16:03:37 compute-0 sudo[77007]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d630f328d502f80f3ebccf9756ebd0dc9f21da99df9f310c4233ac0e7fa4aa6-merged.mount: Deactivated successfully.
Dec 09 16:03:37 compute-0 podman[77019]: 2025-12-09 16:03:37.940392836 +0000 UTC m=+0.068826039 container remove 77a8c5fe812a31015c258d5004b0538a81fccb6bea53f5aead9d7943de5ded2d (image=quay.io/ceph/ceph:v20, name=bold_ishizaka, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:37 compute-0 systemd[1]: libpod-conmon-77a8c5fe812a31015c258d5004b0538a81fccb6bea53f5aead9d7943de5ded2d.scope: Deactivated successfully.
Dec 09 16:03:37 compute-0 sudo[77046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Dec 09 16:03:37 compute-0 sudo[77046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:38 compute-0 podman[77069]: 2025-12-09 16:03:38.01890854 +0000 UTC m=+0.047480202 container create 5c63a6c32ec5bedfcc2dbefa1b6e015f023df3530d899c005c6d4ce91ee5c171 (image=quay.io/ceph/ceph:v20, name=optimistic_hawking, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:38 compute-0 systemd[1]: Started libpod-conmon-5c63a6c32ec5bedfcc2dbefa1b6e015f023df3530d899c005c6d4ce91ee5c171.scope.
Dec 09 16:03:38 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:38 compute-0 podman[77069]: 2025-12-09 16:03:37.99594086 +0000 UTC m=+0.024512552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f7f2e887ad39a88cc15a47685338214b40ae3c938c7fed658ffe467a64ad8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f7f2e887ad39a88cc15a47685338214b40ae3c938c7fed658ffe467a64ad8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f7f2e887ad39a88cc15a47685338214b40ae3c938c7fed658ffe467a64ad8e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:38 compute-0 podman[77069]: 2025-12-09 16:03:38.116742925 +0000 UTC m=+0.145314607 container init 5c63a6c32ec5bedfcc2dbefa1b6e015f023df3530d899c005c6d4ce91ee5c171 (image=quay.io/ceph/ceph:v20, name=optimistic_hawking, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:38 compute-0 podman[77069]: 2025-12-09 16:03:38.125429159 +0000 UTC m=+0.154000831 container start 5c63a6c32ec5bedfcc2dbefa1b6e015f023df3530d899c005c6d4ce91ee5c171 (image=quay.io/ceph/ceph:v20, name=optimistic_hawking, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:38 compute-0 podman[77069]: 2025-12-09 16:03:38.12883915 +0000 UTC m=+0.157410832 container attach 5c63a6c32ec5bedfcc2dbefa1b6e015f023df3530d899c005c6d4ce91ee5c171 (image=quay.io/ceph/ceph:v20, name=optimistic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:03:38 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:38 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:38 compute-0 ceph-mgr[75515]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 09 16:03:38 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 09 16:03:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 09 16:03:38 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:38 compute-0 optimistic_hawking[77087]: Scheduled mon update...
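[editorial note] "Saving service mon spec with placement count:5" records the orchestrator persisting a service specification in the mon's config-key store (key mgr/cephadm/spec.mon, per the preceding handle_command line). A sketch of an equivalent spec built in Python and dumped to the YAML form that `ceph orch apply -i <file>` accepts (assumes PyYAML is installed; the fields follow cephadm's documented ServiceSpec layout):

    import yaml  # PyYAML, an assumed dependency

    # Equivalent of the logged `orch apply` for mons with placement count 5.
    mon_spec = {
        "service_type": "mon",
        "placement": {"count": 5},
    }
    print(yaml.safe_dump(mon_spec, sort_keys=False))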
Dec 09 16:03:38 compute-0 systemd[1]: libpod-5c63a6c32ec5bedfcc2dbefa1b6e015f023df3530d899c005c6d4ce91ee5c171.scope: Deactivated successfully.
Dec 09 16:03:38 compute-0 podman[77069]: 2025-12-09 16:03:38.568939962 +0000 UTC m=+0.597511634 container died 5c63a6c32ec5bedfcc2dbefa1b6e015f023df3530d899c005c6d4ce91ee5c171 (image=quay.io/ceph/ceph:v20, name=optimistic_hawking, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:03:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-07f7f2e887ad39a88cc15a47685338214b40ae3c938c7fed658ffe467a64ad8e-merged.mount: Deactivated successfully.
Dec 09 16:03:38 compute-0 podman[77069]: 2025-12-09 16:03:38.609734434 +0000 UTC m=+0.638306086 container remove 5c63a6c32ec5bedfcc2dbefa1b6e015f023df3530d899c005c6d4ce91ee5c171 (image=quay.io/ceph/ceph:v20, name=optimistic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:38 compute-0 systemd[1]: libpod-conmon-5c63a6c32ec5bedfcc2dbefa1b6e015f023df3530d899c005c6d4ce91ee5c171.scope: Deactivated successfully.
Dec 09 16:03:38 compute-0 podman[77151]: 2025-12-09 16:03:38.677945952 +0000 UTC m=+0.046817740 container create e6cb766e38acba13c91332ad5d0e35653a5588c190762110efc482265eaaa904 (image=quay.io/ceph/ceph:v20, name=upbeat_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:38 compute-0 systemd[1]: Started libpod-conmon-e6cb766e38acba13c91332ad5d0e35653a5588c190762110efc482265eaaa904.scope.
Dec 09 16:03:38 compute-0 podman[77151]: 2025-12-09 16:03:38.654328711 +0000 UTC m=+0.023200549 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:38 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:38 compute-0 podman[77104]: 2025-12-09 16:03:38.754279945 +0000 UTC m=+0.519173406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b912ee9fde696eb096548797e482653130950538842c2f337106ef48d781f103/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b912ee9fde696eb096548797e482653130950538842c2f337106ef48d781f103/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b912ee9fde696eb096548797e482653130950538842c2f337106ef48d781f103/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:38 compute-0 podman[77151]: 2025-12-09 16:03:38.769419299 +0000 UTC m=+0.138291117 container init e6cb766e38acba13c91332ad5d0e35653a5588c190762110efc482265eaaa904 (image=quay.io/ceph/ceph:v20, name=upbeat_gagarin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 09 16:03:38 compute-0 podman[77151]: 2025-12-09 16:03:38.780851902 +0000 UTC m=+0.149723690 container start e6cb766e38acba13c91332ad5d0e35653a5588c190762110efc482265eaaa904 (image=quay.io/ceph/ceph:v20, name=upbeat_gagarin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:03:38 compute-0 podman[77151]: 2025-12-09 16:03:38.78476141 +0000 UTC m=+0.153633258 container attach e6cb766e38acba13c91332ad5d0e35653a5588c190762110efc482265eaaa904 (image=quay.io/ceph/ceph:v20, name=upbeat_gagarin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:03:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:38 compute-0 ceph-mon[75222]: Added host compute-0
Dec 09 16:03:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 09 16:03:38 compute-0 ceph-mon[75222]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:38 compute-0 ceph-mon[75222]: Saving service mon spec with placement count:5
Dec 09 16:03:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:38 compute-0 podman[77186]: 2025-12-09 16:03:38.894023808 +0000 UTC m=+0.057901912 container create 808ccd54d64b3e82582f3c1718704aa0a3be27fe2499d1f85c9cbaafe808dd86 (image=quay.io/ceph/ceph:v20, name=recursing_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:03:38 compute-0 systemd[1]: Started libpod-conmon-808ccd54d64b3e82582f3c1718704aa0a3be27fe2499d1f85c9cbaafe808dd86.scope.
Dec 09 16:03:38 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:38 compute-0 podman[77186]: 2025-12-09 16:03:38.870662735 +0000 UTC m=+0.034540889 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:38 compute-0 podman[77186]: 2025-12-09 16:03:38.97155474 +0000 UTC m=+0.135432904 container init 808ccd54d64b3e82582f3c1718704aa0a3be27fe2499d1f85c9cbaafe808dd86 (image=quay.io/ceph/ceph:v20, name=recursing_cartwright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 09 16:03:38 compute-0 podman[77186]: 2025-12-09 16:03:38.976627706 +0000 UTC m=+0.140505860 container start 808ccd54d64b3e82582f3c1718704aa0a3be27fe2499d1f85c9cbaafe808dd86 (image=quay.io/ceph/ceph:v20, name=recursing_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:38 compute-0 podman[77186]: 2025-12-09 16:03:38.980296416 +0000 UTC m=+0.144174530 container attach 808ccd54d64b3e82582f3c1718704aa0a3be27fe2499d1f85c9cbaafe808dd86 (image=quay.io/ceph/ceph:v20, name=recursing_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:39 compute-0 recursing_cartwright[77221]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 09 16:03:39 compute-0 systemd[1]: libpod-808ccd54d64b3e82582f3c1718704aa0a3be27fe2499d1f85c9cbaafe808dd86.scope: Deactivated successfully.
Dec 09 16:03:39 compute-0 podman[77186]: 2025-12-09 16:03:39.098603009 +0000 UTC m=+0.262481113 container died 808ccd54d64b3e82582f3c1718704aa0a3be27fe2499d1f85c9cbaafe808dd86 (image=quay.io/ceph/ceph:v20, name=recursing_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 16:03:39 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:39 compute-0 ceph-mgr[75515]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 09 16:03:39 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 09 16:03:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 09 16:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1980e37b0ed8833ce8ca05c85bab2b373680cce42220c7e3ef9cb9b202bff64a-merged.mount: Deactivated successfully.
Dec 09 16:03:39 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:39 compute-0 upbeat_gagarin[77167]: Scheduled mgr update...
Dec 09 16:03:39 compute-0 podman[77186]: 2025-12-09 16:03:39.234549159 +0000 UTC m=+0.398427273 container remove 808ccd54d64b3e82582f3c1718704aa0a3be27fe2499d1f85c9cbaafe808dd86 (image=quay.io/ceph/ceph:v20, name=recursing_cartwright, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:39 compute-0 systemd[1]: libpod-e6cb766e38acba13c91332ad5d0e35653a5588c190762110efc482265eaaa904.scope: Deactivated successfully.
Dec 09 16:03:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054700 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:03:39 compute-0 podman[77151]: 2025-12-09 16:03:39.244482653 +0000 UTC m=+0.613354441 container died e6cb766e38acba13c91332ad5d0e35653a5588c190762110efc482265eaaa904 (image=quay.io/ceph/ceph:v20, name=upbeat_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:39 compute-0 systemd[1]: libpod-conmon-808ccd54d64b3e82582f3c1718704aa0a3be27fe2499d1f85c9cbaafe808dd86.scope: Deactivated successfully.
Dec 09 16:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b912ee9fde696eb096548797e482653130950538842c2f337106ef48d781f103-merged.mount: Deactivated successfully.
Dec 09 16:03:39 compute-0 podman[77151]: 2025-12-09 16:03:39.283152006 +0000 UTC m=+0.652023794 container remove e6cb766e38acba13c91332ad5d0e35653a5588c190762110efc482265eaaa904 (image=quay.io/ceph/ceph:v20, name=upbeat_gagarin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:03:39 compute-0 sudo[77046]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:39 compute-0 systemd[1]: libpod-conmon-e6cb766e38acba13c91332ad5d0e35653a5588c190762110efc482265eaaa904.scope: Deactivated successfully.
Dec 09 16:03:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec 09 16:03:39 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:39 compute-0 podman[77253]: 2025-12-09 16:03:39.354417333 +0000 UTC m=+0.048462014 container create 1d390088032fe03452af998073be7ad7ff9b7d4ee5258c9156dc3788703f6a10 (image=quay.io/ceph/ceph:v20, name=happy_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:39 compute-0 systemd[1]: Started libpod-conmon-1d390088032fe03452af998073be7ad7ff9b7d4ee5258c9156dc3788703f6a10.scope.
Dec 09 16:03:39 compute-0 sudo[77264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:39 compute-0 sudo[77264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:39 compute-0 sudo[77264]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:39 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666a606b83678c7c515a3d72fe0836642ebbf2d8134324745fd0096581bb70b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666a606b83678c7c515a3d72fe0836642ebbf2d8134324745fd0096581bb70b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666a606b83678c7c515a3d72fe0836642ebbf2d8134324745fd0096581bb70b1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:39 compute-0 podman[77253]: 2025-12-09 16:03:39.336434516 +0000 UTC m=+0.030479217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:39 compute-0 podman[77253]: 2025-12-09 16:03:39.439286365 +0000 UTC m=+0.133331076 container init 1d390088032fe03452af998073be7ad7ff9b7d4ee5258c9156dc3788703f6a10 (image=quay.io/ceph/ceph:v20, name=happy_kapitsa, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:39 compute-0 podman[77253]: 2025-12-09 16:03:39.445398114 +0000 UTC m=+0.139442805 container start 1d390088032fe03452af998073be7ad7ff9b7d4ee5258c9156dc3788703f6a10 (image=quay.io/ceph/ceph:v20, name=happy_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:39 compute-0 podman[77253]: 2025-12-09 16:03:39.44987408 +0000 UTC m=+0.143918751 container attach 1d390088032fe03452af998073be7ad7ff9b7d4ee5258c9156dc3788703f6a10 (image=quay.io/ceph/ceph:v20, name=happy_kapitsa, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Dec 09 16:03:39 compute-0 sudo[77297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 09 16:03:39 compute-0 sudo[77297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:39 compute-0 sudo[77297]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:39 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:39 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:39 compute-0 ceph-mgr[75515]: [cephadm INFO root] Saving service crash spec with placement *
Dec 09 16:03:39 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 09 16:03:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 09 16:03:39 compute-0 ceph-mgr[75515]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 16:03:39 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:39 compute-0 happy_kapitsa[77293]: Scheduled crash update...
Dec 09 16:03:39 compute-0 systemd[1]: libpod-1d390088032fe03452af998073be7ad7ff9b7d4ee5258c9156dc3788703f6a10.scope: Deactivated successfully.
Dec 09 16:03:39 compute-0 podman[77253]: 2025-12-09 16:03:39.884070399 +0000 UTC m=+0.578115080 container died 1d390088032fe03452af998073be7ad7ff9b7d4ee5258c9156dc3788703f6a10 (image=quay.io/ceph/ceph:v20, name=happy_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:39 compute-0 sudo[77363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:39 compute-0 sudo[77363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:39 compute-0 sudo[77363]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-666a606b83678c7c515a3d72fe0836642ebbf2d8134324745fd0096581bb70b1-merged.mount: Deactivated successfully.
Dec 09 16:03:39 compute-0 podman[77253]: 2025-12-09 16:03:39.923475656 +0000 UTC m=+0.617520337 container remove 1d390088032fe03452af998073be7ad7ff9b7d4ee5258c9156dc3788703f6a10 (image=quay.io/ceph/ceph:v20, name=happy_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:39 compute-0 systemd[1]: libpod-conmon-1d390088032fe03452af998073be7ad7ff9b7d4ee5258c9156dc3788703f6a10.scope: Deactivated successfully.
Dec 09 16:03:39 compute-0 sudo[77397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 09 16:03:39 compute-0 sudo[77397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:39 compute-0 podman[77420]: 2025-12-09 16:03:39.986744902 +0000 UTC m=+0.040509084 container create 1d581d685aa58f389065c5a24d6744d7f6801778d03d318c7d51b53968f21347 (image=quay.io/ceph/ceph:v20, name=trusting_pascal, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 09 16:03:40 compute-0 systemd[1]: Started libpod-conmon-1d581d685aa58f389065c5a24d6744d7f6801778d03d318c7d51b53968f21347.scope.
Dec 09 16:03:40 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1904c184462ef901ef4db278bc92213dbb7758b8765f8d2ab38aeeaa864febda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1904c184462ef901ef4db278bc92213dbb7758b8765f8d2ab38aeeaa864febda/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1904c184462ef901ef4db278bc92213dbb7758b8765f8d2ab38aeeaa864febda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:40 compute-0 podman[77420]: 2025-12-09 16:03:39.967340678 +0000 UTC m=+0.021104890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:40 compute-0 podman[77420]: 2025-12-09 16:03:40.069102031 +0000 UTC m=+0.122866283 container init 1d581d685aa58f389065c5a24d6744d7f6801778d03d318c7d51b53968f21347 (image=quay.io/ceph/ceph:v20, name=trusting_pascal, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:40 compute-0 podman[77420]: 2025-12-09 16:03:40.077872008 +0000 UTC m=+0.131636190 container start 1d581d685aa58f389065c5a24d6744d7f6801778d03d318c7d51b53968f21347 (image=quay.io/ceph/ceph:v20, name=trusting_pascal, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 09 16:03:40 compute-0 podman[77420]: 2025-12-09 16:03:40.086770458 +0000 UTC m=+0.140534690 container attach 1d581d685aa58f389065c5a24d6744d7f6801778d03d318c7d51b53968f21347 (image=quay.io/ceph/ceph:v20, name=trusting_pascal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:40 compute-0 ceph-mon[75222]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:40 compute-0 ceph-mon[75222]: Saving service mgr spec with placement count:2
Dec 09 16:03:40 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:40 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:40 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:40 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:40 compute-0 podman[77512]: 2025-12-09 16:03:40.406681975 +0000 UTC m=+0.051772471 container exec 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:40 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec 09 16:03:40 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4272836924' entity='client.admin' 
Dec 09 16:03:40 compute-0 systemd[1]: libpod-1d581d685aa58f389065c5a24d6744d7f6801778d03d318c7d51b53968f21347.scope: Deactivated successfully.
Dec 09 16:03:40 compute-0 conmon[77444]: conmon 1d581d685aa58f389065 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d581d685aa58f389065c5a24d6744d7f6801778d03d318c7d51b53968f21347.scope/container/memory.events
Dec 09 16:03:40 compute-0 podman[77420]: 2025-12-09 16:03:40.542861283 +0000 UTC m=+0.596625495 container died 1d581d685aa58f389065c5a24d6744d7f6801778d03d318c7d51b53968f21347 (image=quay.io/ceph/ceph:v20, name=trusting_pascal, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 09 16:03:40 compute-0 podman[77512]: 2025-12-09 16:03:40.54309483 +0000 UTC m=+0.188185306 container exec_died 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 09 16:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1904c184462ef901ef4db278bc92213dbb7758b8765f8d2ab38aeeaa864febda-merged.mount: Deactivated successfully.
Dec 09 16:03:40 compute-0 podman[77420]: 2025-12-09 16:03:40.599801562 +0000 UTC m=+0.653565744 container remove 1d581d685aa58f389065c5a24d6744d7f6801778d03d318c7d51b53968f21347 (image=quay.io/ceph/ceph:v20, name=trusting_pascal, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:40 compute-0 systemd[1]: libpod-conmon-1d581d685aa58f389065c5a24d6744d7f6801778d03d318c7d51b53968f21347.scope: Deactivated successfully.
Dec 09 16:03:40 compute-0 podman[77568]: 2025-12-09 16:03:40.659991008 +0000 UTC m=+0.040147792 container create f2394ce3025675d1d0237cf263a5fe3fe274daf62c99cfe61eb4b8513d79fc7f (image=quay.io/ceph/ceph:v20, name=sweet_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 09 16:03:40 compute-0 systemd[1]: Started libpod-conmon-f2394ce3025675d1d0237cf263a5fe3fe274daf62c99cfe61eb4b8513d79fc7f.scope.
Dec 09 16:03:40 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8021435c7c0914b2824f9cfc555f483dc82e2550943af5ce8c4a46c0a29dc850/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8021435c7c0914b2824f9cfc555f483dc82e2550943af5ce8c4a46c0a29dc850/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8021435c7c0914b2824f9cfc555f483dc82e2550943af5ce8c4a46c0a29dc850/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:40 compute-0 podman[77568]: 2025-12-09 16:03:40.642441414 +0000 UTC m=+0.022598218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:40 compute-0 podman[77568]: 2025-12-09 16:03:40.744472676 +0000 UTC m=+0.124629490 container init f2394ce3025675d1d0237cf263a5fe3fe274daf62c99cfe61eb4b8513d79fc7f (image=quay.io/ceph/ceph:v20, name=sweet_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:40 compute-0 podman[77568]: 2025-12-09 16:03:40.749879513 +0000 UTC m=+0.130036297 container start f2394ce3025675d1d0237cf263a5fe3fe274daf62c99cfe61eb4b8513d79fc7f (image=quay.io/ceph/ceph:v20, name=sweet_franklin, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:40 compute-0 podman[77568]: 2025-12-09 16:03:40.753078938 +0000 UTC m=+0.133235742 container attach f2394ce3025675d1d0237cf263a5fe3fe274daf62c99cfe61eb4b8513d79fc7f (image=quay.io/ceph/ceph:v20, name=sweet_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:40 compute-0 sudo[77397]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:40 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:40 compute-0 sudo[77643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:40 compute-0 sudo[77643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:40 compute-0 sudo[77643]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:41 compute-0 sudo[77668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:03:41 compute-0 sudo[77668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:41 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec 09 16:03:41 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:41 compute-0 systemd[1]: libpod-f2394ce3025675d1d0237cf263a5fe3fe274daf62c99cfe61eb4b8513d79fc7f.scope: Deactivated successfully.
Dec 09 16:03:41 compute-0 podman[77568]: 2025-12-09 16:03:41.167440309 +0000 UTC m=+0.547597093 container died f2394ce3025675d1d0237cf263a5fe3fe274daf62c99cfe61eb4b8513d79fc7f (image=quay.io/ceph/ceph:v20, name=sweet_franklin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8021435c7c0914b2824f9cfc555f483dc82e2550943af5ce8c4a46c0a29dc850-merged.mount: Deactivated successfully.
Dec 09 16:03:41 compute-0 podman[77568]: 2025-12-09 16:03:41.214546398 +0000 UTC m=+0.594703232 container remove f2394ce3025675d1d0237cf263a5fe3fe274daf62c99cfe61eb4b8513d79fc7f (image=quay.io/ceph/ceph:v20, name=sweet_franklin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 09 16:03:41 compute-0 systemd[1]: libpod-conmon-f2394ce3025675d1d0237cf263a5fe3fe274daf62c99cfe61eb4b8513d79fc7f.scope: Deactivated successfully.
Dec 09 16:03:41 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77727 (sysctl)
Dec 09 16:03:41 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 09 16:03:41 compute-0 podman[77715]: 2025-12-09 16:03:41.275592141 +0000 UTC m=+0.037107373 container create f246eb77407d1e76851e0a84d667a015ce06cdc05f016165cb2c3a5429c7da45 (image=quay.io/ceph/ceph:v20, name=hopeful_hertz, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:03:41 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 09 16:03:41 compute-0 systemd[1]: Started libpod-conmon-f246eb77407d1e76851e0a84d667a015ce06cdc05f016165cb2c3a5429c7da45.scope.
Dec 09 16:03:41 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9614c6a5e0996e9f9c1bb5b346fa6b664ca9a2ec013e7a30e08e2d980b28aa0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9614c6a5e0996e9f9c1bb5b346fa6b664ca9a2ec013e7a30e08e2d980b28aa0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9614c6a5e0996e9f9c1bb5b346fa6b664ca9a2ec013e7a30e08e2d980b28aa0f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:41 compute-0 podman[77715]: 2025-12-09 16:03:41.258220374 +0000 UTC m=+0.019735626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:41 compute-0 podman[77715]: 2025-12-09 16:03:41.360027568 +0000 UTC m=+0.121542820 container init f246eb77407d1e76851e0a84d667a015ce06cdc05f016165cb2c3a5429c7da45 (image=quay.io/ceph/ceph:v20, name=hopeful_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:03:41 compute-0 podman[77715]: 2025-12-09 16:03:41.367993698 +0000 UTC m=+0.129508930 container start f246eb77407d1e76851e0a84d667a015ce06cdc05f016165cb2c3a5429c7da45 (image=quay.io/ceph/ceph:v20, name=hopeful_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 09 16:03:41 compute-0 podman[77715]: 2025-12-09 16:03:41.372519656 +0000 UTC m=+0.134034888 container attach f246eb77407d1e76851e0a84d667a015ce06cdc05f016165cb2c3a5429c7da45 (image=quay.io/ceph/ceph:v20, name=hopeful_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 09 16:03:41 compute-0 ceph-mon[75222]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:41 compute-0 ceph-mon[75222]: Saving service crash spec with placement *
Dec 09 16:03:41 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4272836924' entity='client.admin' 
Dec 09 16:03:41 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:41 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:41 compute-0 sudo[77668]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:41 compute-0 sudo[77782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:41 compute-0 sudo[77782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:41 compute-0 sudo[77782]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:41 compute-0 sudo[77807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Dec 09 16:03:41 compute-0 sudo[77807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:41 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 16:03:41 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:41 compute-0 ceph-mgr[75515]: [cephadm INFO root] Added label _admin to host compute-0
Dec 09 16:03:41 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 09 16:03:41 compute-0 hopeful_hertz[77740]: Added label _admin to host compute-0
Dec 09 16:03:41 compute-0 systemd[1]: libpod-f246eb77407d1e76851e0a84d667a015ce06cdc05f016165cb2c3a5429c7da45.scope: Deactivated successfully.
Dec 09 16:03:41 compute-0 podman[77715]: 2025-12-09 16:03:41.827086321 +0000 UTC m=+0.588601553 container died f246eb77407d1e76851e0a84d667a015ce06cdc05f016165cb2c3a5429c7da45 (image=quay.io/ceph/ceph:v20, name=hopeful_hertz, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9614c6a5e0996e9f9c1bb5b346fa6b664ca9a2ec013e7a30e08e2d980b28aa0f-merged.mount: Deactivated successfully.
Dec 09 16:03:41 compute-0 ceph-mgr[75515]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 16:03:41 compute-0 podman[77715]: 2025-12-09 16:03:41.86839572 +0000 UTC m=+0.629910962 container remove f246eb77407d1e76851e0a84d667a015ce06cdc05f016165cb2c3a5429c7da45 (image=quay.io/ceph/ceph:v20, name=hopeful_hertz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 16:03:41 compute-0 systemd[1]: libpod-conmon-f246eb77407d1e76851e0a84d667a015ce06cdc05f016165cb2c3a5429c7da45.scope: Deactivated successfully.
Dec 09 16:03:41 compute-0 podman[77845]: 2025-12-09 16:03:41.94954324 +0000 UTC m=+0.053462387 container create 21fd8f89d5852610f006ea34b52d6b1bc16d3738abb9918498be81f684f910bd (image=quay.io/ceph/ceph:v20, name=romantic_banach, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:41 compute-0 systemd[1]: Started libpod-conmon-21fd8f89d5852610f006ea34b52d6b1bc16d3738abb9918498be81f684f910bd.scope.
Dec 09 16:03:41 compute-0 sudo[77807]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:42 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:42 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:42 compute-0 podman[77845]: 2025-12-09 16:03:41.926766356 +0000 UTC m=+0.030685543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8031d972cbe44cbebdb3ea0ceb88231c3b7d500b6e498b49f3f16a3a57dd905c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8031d972cbe44cbebdb3ea0ceb88231c3b7d500b6e498b49f3f16a3a57dd905c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8031d972cbe44cbebdb3ea0ceb88231c3b7d500b6e498b49f3f16a3a57dd905c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:42 compute-0 podman[77845]: 2025-12-09 16:03:42.033274034 +0000 UTC m=+0.137193201 container init 21fd8f89d5852610f006ea34b52d6b1bc16d3738abb9918498be81f684f910bd (image=quay.io/ceph/ceph:v20, name=romantic_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:42 compute-0 podman[77845]: 2025-12-09 16:03:42.039186597 +0000 UTC m=+0.143105754 container start 21fd8f89d5852610f006ea34b52d6b1bc16d3738abb9918498be81f684f910bd (image=quay.io/ceph/ceph:v20, name=romantic_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 09 16:03:42 compute-0 podman[77845]: 2025-12-09 16:03:42.041800693 +0000 UTC m=+0.145719870 container attach 21fd8f89d5852610f006ea34b52d6b1bc16d3738abb9918498be81f684f910bd (image=quay.io/ceph/ceph:v20, name=romantic_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:42 compute-0 sudo[77882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:42 compute-0 sudo[77882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:42 compute-0 sudo[77882]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:42 compute-0 sudo[77909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- inventory --format=json-pretty --filter-for-batch
Dec 09 16:03:42 compute-0 sudo[77909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:42 compute-0 podman[77965]: 2025-12-09 16:03:42.462955486 +0000 UTC m=+0.055830174 container create e9f9033eae83b69b770eff00e7b6da56ebfac0a2e3a14beec6d2aa0d998315b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:03:42 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:42 compute-0 systemd[1]: Started libpod-conmon-e9f9033eae83b69b770eff00e7b6da56ebfac0a2e3a14beec6d2aa0d998315b4.scope.
Dec 09 16:03:42 compute-0 ceph-mon[75222]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:42 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:42 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:42 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:42 compute-0 podman[77965]: 2025-12-09 16:03:42.434402844 +0000 UTC m=+0.027277602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:03:42 compute-0 podman[77965]: 2025-12-09 16:03:42.541959156 +0000 UTC m=+0.134833854 container init e9f9033eae83b69b770eff00e7b6da56ebfac0a2e3a14beec6d2aa0d998315b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 16:03:42 compute-0 podman[77965]: 2025-12-09 16:03:42.549521373 +0000 UTC m=+0.142396061 container start e9f9033eae83b69b770eff00e7b6da56ebfac0a2e3a14beec6d2aa0d998315b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_dijkstra, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 09 16:03:42 compute-0 podman[77965]: 2025-12-09 16:03:42.553012277 +0000 UTC m=+0.145887045 container attach e9f9033eae83b69b770eff00e7b6da56ebfac0a2e3a14beec6d2aa0d998315b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 16:03:42 compute-0 dreamy_dijkstra[77982]: 167 167
Dec 09 16:03:42 compute-0 podman[77965]: 2025-12-09 16:03:42.557358609 +0000 UTC m=+0.150233297 container died e9f9033eae83b69b770eff00e7b6da56ebfac0a2e3a14beec6d2aa0d998315b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:03:42 compute-0 systemd[1]: libpod-e9f9033eae83b69b770eff00e7b6da56ebfac0a2e3a14beec6d2aa0d998315b4.scope: Deactivated successfully.
Dec 09 16:03:42 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec 09 16:03:42 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4032196309' entity='client.admin' 
Dec 09 16:03:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe30fa3c62095085471150f721546a63470baa3cdbbfe30a921be2e9e85d2691-merged.mount: Deactivated successfully.
Dec 09 16:03:42 compute-0 romantic_banach[77879]: set mgr/dashboard/cluster/status
Dec 09 16:03:42 compute-0 podman[77965]: 2025-12-09 16:03:42.596696804 +0000 UTC m=+0.189571502 container remove e9f9033eae83b69b770eff00e7b6da56ebfac0a2e3a14beec6d2aa0d998315b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_dijkstra, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:42 compute-0 systemd[1]: libpod-conmon-e9f9033eae83b69b770eff00e7b6da56ebfac0a2e3a14beec6d2aa0d998315b4.scope: Deactivated successfully.
Dec 09 16:03:42 compute-0 podman[77845]: 2025-12-09 16:03:42.607796336 +0000 UTC m=+0.711715513 container died 21fd8f89d5852610f006ea34b52d6b1bc16d3738abb9918498be81f684f910bd (image=quay.io/ceph/ceph:v20, name=romantic_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 09 16:03:42 compute-0 systemd[1]: libpod-21fd8f89d5852610f006ea34b52d6b1bc16d3738abb9918498be81f684f910bd.scope: Deactivated successfully.
Dec 09 16:03:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8031d972cbe44cbebdb3ea0ceb88231c3b7d500b6e498b49f3f16a3a57dd905c-merged.mount: Deactivated successfully.
Dec 09 16:03:42 compute-0 podman[77845]: 2025-12-09 16:03:42.648837276 +0000 UTC m=+0.752756433 container remove 21fd8f89d5852610f006ea34b52d6b1bc16d3738abb9918498be81f684f910bd (image=quay.io/ceph/ceph:v20, name=romantic_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 09 16:03:42 compute-0 systemd[1]: libpod-conmon-21fd8f89d5852610f006ea34b52d6b1bc16d3738abb9918498be81f684f910bd.scope: Deactivated successfully.
Dec 09 16:03:42 compute-0 systemd[1]: Reloading.
Dec 09 16:03:42 compute-0 systemd-rc-local-generator[78039]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:42 compute-0 systemd-sysv-generator[78044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:03:43 compute-0 sudo[74151]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:43 compute-0 podman[78059]: 2025-12-09 16:03:43.200637855 +0000 UTC m=+0.060779145 container create ce1ad9f592240ecb76b536ac209c0c25c225b49f59321d9a4ccaf0e8307fd58e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 09 16:03:43 compute-0 systemd[1]: Started libpod-conmon-ce1ad9f592240ecb76b536ac209c0c25c225b49f59321d9a4ccaf0e8307fd58e.scope.
Dec 09 16:03:43 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:43 compute-0 podman[78059]: 2025-12-09 16:03:43.182584466 +0000 UTC m=+0.042725786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac9966b1d92ee9487009ed19d71dc417697aa1b7b68000a3980f905f7cde8e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac9966b1d92ee9487009ed19d71dc417697aa1b7b68000a3980f905f7cde8e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac9966b1d92ee9487009ed19d71dc417697aa1b7b68000a3980f905f7cde8e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac9966b1d92ee9487009ed19d71dc417697aa1b7b68000a3980f905f7cde8e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:43 compute-0 podman[78059]: 2025-12-09 16:03:43.296461845 +0000 UTC m=+0.156603215 container init ce1ad9f592240ecb76b536ac209c0c25c225b49f59321d9a4ccaf0e8307fd58e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 09 16:03:43 compute-0 podman[78059]: 2025-12-09 16:03:43.311818586 +0000 UTC m=+0.171959896 container start ce1ad9f592240ecb76b536ac209c0c25c225b49f59321d9a4ccaf0e8307fd58e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:03:43 compute-0 podman[78059]: 2025-12-09 16:03:43.315974152 +0000 UTC m=+0.176115472 container attach ce1ad9f592240ecb76b536ac209c0c25c225b49f59321d9a4ccaf0e8307fd58e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:43 compute-0 sudo[78104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aquovzxxonircdlwkpwmhpfwfaxodnqt ; /usr/bin/python3'
Dec 09 16:03:43 compute-0 sudo[78104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:03:43 compute-0 ceph-mon[75222]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:43 compute-0 ceph-mon[75222]: Added label _admin to host compute-0
Dec 09 16:03:43 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4032196309' entity='client.admin' 
Dec 09 16:03:43 compute-0 python3[78106]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:03:43 compute-0 podman[78112]: 2025-12-09 16:03:43.746994238 +0000 UTC m=+0.061625524 container create 0a3fd1ad653a569ee79c2387402a978cb9b03173d192984c6b45003277ee5e98 (image=quay.io/ceph/ceph:v20, name=eloquent_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 09 16:03:43 compute-0 systemd[1]: Started libpod-conmon-0a3fd1ad653a569ee79c2387402a978cb9b03173d192984c6b45003277ee5e98.scope.
Dec 09 16:03:43 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7174e8f4b25bab709c6f23890ef647f841c52031bcf7e9a57c04cead1985a022/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7174e8f4b25bab709c6f23890ef647f841c52031bcf7e9a57c04cead1985a022/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:43 compute-0 podman[78112]: 2025-12-09 16:03:43.725689852 +0000 UTC m=+0.040321218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:43 compute-0 podman[78112]: 2025-12-09 16:03:43.833480942 +0000 UTC m=+0.148112228 container init 0a3fd1ad653a569ee79c2387402a978cb9b03173d192984c6b45003277ee5e98 (image=quay.io/ceph/ceph:v20, name=eloquent_lalande, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Dec 09 16:03:43 compute-0 podman[78112]: 2025-12-09 16:03:43.84292498 +0000 UTC m=+0.157556266 container start 0a3fd1ad653a569ee79c2387402a978cb9b03173d192984c6b45003277ee5e98 (image=quay.io/ceph/ceph:v20, name=eloquent_lalande, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 16:03:43 compute-0 podman[78112]: 2025-12-09 16:03:43.846385173 +0000 UTC m=+0.161016529 container attach 0a3fd1ad653a569ee79c2387402a978cb9b03173d192984c6b45003277ee5e98 (image=quay.io/ceph/ceph:v20, name=eloquent_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 09 16:03:43 compute-0 ceph-mgr[75515]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 16:03:43 compute-0 amazing_keller[78076]: [
Dec 09 16:03:43 compute-0 amazing_keller[78076]:     {
Dec 09 16:03:43 compute-0 amazing_keller[78076]:         "available": false,
Dec 09 16:03:43 compute-0 amazing_keller[78076]:         "being_replaced": false,
Dec 09 16:03:43 compute-0 amazing_keller[78076]:         "ceph_device_lvm": false,
Dec 09 16:03:43 compute-0 amazing_keller[78076]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:         "lsm_data": {},
Dec 09 16:03:43 compute-0 amazing_keller[78076]:         "lvs": [],
Dec 09 16:03:43 compute-0 amazing_keller[78076]:         "path": "/dev/sr0",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:         "rejected_reasons": [
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "Insufficient space (<5GB)",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "Has a FileSystem"
Dec 09 16:03:43 compute-0 amazing_keller[78076]:         ],
Dec 09 16:03:43 compute-0 amazing_keller[78076]:         "sys_api": {
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "actuators": null,
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "device_nodes": [
Dec 09 16:03:43 compute-0 amazing_keller[78076]:                 "sr0"
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             ],
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "devname": "sr0",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "human_readable_size": "482.00 KB",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "id_bus": "ata",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "model": "QEMU DVD-ROM",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "nr_requests": "2",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "parent": "/dev/sr0",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "partitions": {},
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "path": "/dev/sr0",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "removable": "1",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "rev": "2.5+",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "ro": "0",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "rotational": "1",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "sas_address": "",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "sas_device_handle": "",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "scheduler_mode": "mq-deadline",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "sectors": 0,
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "sectorsize": "2048",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "size": 493568.0,
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "support_discard": "2048",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "type": "disk",
Dec 09 16:03:43 compute-0 amazing_keller[78076]:             "vendor": "QEMU"
Dec 09 16:03:43 compute-0 amazing_keller[78076]:         }
Dec 09 16:03:43 compute-0 amazing_keller[78076]:     }
Dec 09 16:03:43 compute-0 amazing_keller[78076]: ]
Dec 09 16:03:43 compute-0 systemd[1]: libpod-ce1ad9f592240ecb76b536ac209c0c25c225b49f59321d9a4ccaf0e8307fd58e.scope: Deactivated successfully.
Dec 09 16:03:43 compute-0 podman[78059]: 2025-12-09 16:03:43.895652012 +0000 UTC m=+0.755793292 container died ce1ad9f592240ecb76b536ac209c0c25c225b49f59321d9a4ccaf0e8307fd58e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 09 16:03:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ac9966b1d92ee9487009ed19d71dc417697aa1b7b68000a3980f905f7cde8e7-merged.mount: Deactivated successfully.
Dec 09 16:03:43 compute-0 podman[78059]: 2025-12-09 16:03:43.93814682 +0000 UTC m=+0.798288110 container remove ce1ad9f592240ecb76b536ac209c0c25c225b49f59321d9a4ccaf0e8307fd58e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 09 16:03:43 compute-0 systemd[1]: libpod-conmon-ce1ad9f592240ecb76b536ac209c0c25c225b49f59321d9a4ccaf0e8307fd58e.scope: Deactivated successfully.
Dec 09 16:03:43 compute-0 sudo[77909]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:43 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:03:43 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:43 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:43 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:43 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:03:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 09 16:03:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 09 16:03:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:03:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:03:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:03:44 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 09 16:03:44 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 09 16:03:44 compute-0 sudo[78772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 09 16:03:44 compute-0 sudo[78772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 sudo[78772]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 sudo[78797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/etc/ceph
Dec 09 16:03:44 compute-0 sudo[78797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 sudo[78797]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 sudo[78822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/etc/ceph/ceph.conf.new
Dec 09 16:03:44 compute-0 sudo[78822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 sudo[78822]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 sudo[78847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:44 compute-0 sudo[78847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 sudo[78847]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:03:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec 09 16:03:44 compute-0 sudo[78872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/etc/ceph/ceph.conf.new
Dec 09 16:03:44 compute-0 sudo[78872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 sudo[78872]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3388378665' entity='client.admin' 
Dec 09 16:03:44 compute-0 systemd[1]: libpod-0a3fd1ad653a569ee79c2387402a978cb9b03173d192984c6b45003277ee5e98.scope: Deactivated successfully.
Dec 09 16:03:44 compute-0 podman[78112]: 2025-12-09 16:03:44.4023874 +0000 UTC m=+0.717018706 container died 0a3fd1ad653a569ee79c2387402a978cb9b03173d192984c6b45003277ee5e98 (image=quay.io/ceph/ceph:v20, name=eloquent_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:03:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7174e8f4b25bab709c6f23890ef647f841c52031bcf7e9a57c04cead1985a022-merged.mount: Deactivated successfully.
Dec 09 16:03:44 compute-0 podman[78112]: 2025-12-09 16:03:44.44677985 +0000 UTC m=+0.761411146 container remove 0a3fd1ad653a569ee79c2387402a978cb9b03173d192984c6b45003277ee5e98 (image=quay.io/ceph/ceph:v20, name=eloquent_lalande, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:03:44 compute-0 systemd[1]: libpod-conmon-0a3fd1ad653a569ee79c2387402a978cb9b03173d192984c6b45003277ee5e98.scope: Deactivated successfully.
Dec 09 16:03:44 compute-0 sudo[78104]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 sudo[78933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/etc/ceph/ceph.conf.new
Dec 09 16:03:44 compute-0 sudo[78933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:44 compute-0 sudo[78933]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 sudo[78958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/etc/ceph/ceph.conf.new
Dec 09 16:03:44 compute-0 sudo[78958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 sudo[78958]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 sudo[78983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 09 16:03:44 compute-0 sudo[78983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 sudo[78983]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.conf
Dec 09 16:03:44 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.conf
Dec 09 16:03:44 compute-0 sudo[79008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config
Dec 09 16:03:44 compute-0 sudo[79008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 sudo[79008]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 sudo[79033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config
Dec 09 16:03:44 compute-0 sudo[79033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 sudo[79033]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 sudo[79059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.conf.new
Dec 09 16:03:44 compute-0 sudo[79059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 sudo[79059]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 sudo[79114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:44 compute-0 sudo[79114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 sudo[79114]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 sudo[79168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.conf.new
Dec 09 16:03:44 compute-0 sudo[79168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:44 compute-0 sudo[79168]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 09 16:03:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:03:44 compute-0 ceph-mon[75222]: Updating compute-0:/etc/ceph/ceph.conf
Dec 09 16:03:44 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3388378665' entity='client.admin' 
Dec 09 16:03:45 compute-0 sudo[79231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.conf.new
Dec 09 16:03:45 compute-0 sudo[79231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79231]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 sudo[79256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.conf.new
Dec 09 16:03:45 compute-0 sudo[79256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79256]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 sudo[79305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.conf.new /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.conf
Dec 09 16:03:45 compute-0 sudo[79305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79305]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 09 16:03:45 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 09 16:03:45 compute-0 sudo[79401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtfoomvgovuxohwyzakkgppeckpecxsf ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765296224.8090968-36637-185465408240280/async_wrapper.py j766969383853 30 /home/zuul/.ansible/tmp/ansible-tmp-1765296224.8090968-36637-185465408240280/AnsiballZ_command.py _'
Dec 09 16:03:45 compute-0 sudo[79401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:03:45 compute-0 sudo[79358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 09 16:03:45 compute-0 sudo[79358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79358]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 sudo[79406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/etc/ceph
Dec 09 16:03:45 compute-0 sudo[79406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79406]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 sudo[79431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/etc/ceph/ceph.client.admin.keyring.new
Dec 09 16:03:45 compute-0 sudo[79431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79431]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 ansible-async_wrapper.py[79404]: Invoked with j766969383853 30 /home/zuul/.ansible/tmp/ansible-tmp-1765296224.8090968-36637-185465408240280/AnsiballZ_command.py _
Dec 09 16:03:45 compute-0 ansible-async_wrapper.py[79475]: Starting module and watcher
Dec 09 16:03:45 compute-0 ansible-async_wrapper.py[79475]: Start watching 79478 (30)
Dec 09 16:03:45 compute-0 ansible-async_wrapper.py[79478]: Start module (79478)
Dec 09 16:03:45 compute-0 ansible-async_wrapper.py[79404]: Return async_wrapper task started.
Dec 09 16:03:45 compute-0 sudo[79456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:45 compute-0 sudo[79401]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 sudo[79456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79456]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 sudo[79486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/etc/ceph/ceph.client.admin.keyring.new
Dec 09 16:03:45 compute-0 sudo[79486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79486]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 python3[79482]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:03:45 compute-0 sudo[79535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/etc/ceph/ceph.client.admin.keyring.new
Dec 09 16:03:45 compute-0 sudo[79535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79535]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 podman[79534]: 2025-12-09 16:03:45.694633851 +0000 UTC m=+0.053657983 container create c0b73449619ca4a15a2d7bc83f74ff3cd7ad12d217b5e69a15dd043a8af40b1d (image=quay.io/ceph/ceph:v20, name=condescending_merkle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:45 compute-0 systemd[1]: Started libpod-conmon-c0b73449619ca4a15a2d7bc83f74ff3cd7ad12d217b5e69a15dd043a8af40b1d.scope.
Dec 09 16:03:45 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:45 compute-0 sudo[79572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/etc/ceph/ceph.client.admin.keyring.new
Dec 09 16:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f87e1c7156001b1c1f0e1b39ee2d38185fe97f46d9ddf20845a1444abdb6b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f87e1c7156001b1c1f0e1b39ee2d38185fe97f46d9ddf20845a1444abdb6b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:45 compute-0 sudo[79572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79572]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 podman[79534]: 2025-12-09 16:03:45.767632604 +0000 UTC m=+0.126656766 container init c0b73449619ca4a15a2d7bc83f74ff3cd7ad12d217b5e69a15dd043a8af40b1d (image=quay.io/ceph/ceph:v20, name=condescending_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:45 compute-0 podman[79534]: 2025-12-09 16:03:45.677625015 +0000 UTC m=+0.036649177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:45 compute-0 podman[79534]: 2025-12-09 16:03:45.780337149 +0000 UTC m=+0.139361281 container start c0b73449619ca4a15a2d7bc83f74ff3cd7ad12d217b5e69a15dd043a8af40b1d (image=quay.io/ceph/ceph:v20, name=condescending_merkle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:45 compute-0 podman[79534]: 2025-12-09 16:03:45.784117173 +0000 UTC m=+0.143141305 container attach c0b73449619ca4a15a2d7bc83f74ff3cd7ad12d217b5e69a15dd043a8af40b1d (image=quay.io/ceph/ceph:v20, name=condescending_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 09 16:03:45 compute-0 sudo[79602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 09 16:03:45 compute-0 sudo[79602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79602]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.client.admin.keyring
Dec 09 16:03:45 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.client.admin.keyring
Dec 09 16:03:45 compute-0 ceph-mgr[75515]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 09 16:03:45 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:45 compute-0 ceph-mon[75222]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 09 16:03:45 compute-0 sudo[79628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config
Dec 09 16:03:45 compute-0 sudo[79628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79628]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 sudo[79672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config
Dec 09 16:03:45 compute-0 sudo[79672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:45 compute-0 sudo[79672]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:45 compute-0 ceph-mon[75222]: Updating compute-0:/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.conf
Dec 09 16:03:45 compute-0 ceph-mon[75222]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 09 16:03:45 compute-0 ceph-mon[75222]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 09 16:03:46 compute-0 sudo[79697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.client.admin.keyring.new
Dec 09 16:03:46 compute-0 sudo[79697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:46 compute-0 sudo[79697]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:46 compute-0 sudo[79722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:46 compute-0 sudo[79722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:46 compute-0 sudo[79722]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:46 compute-0 sudo[79747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.client.admin.keyring.new
Dec 09 16:03:46 compute-0 sudo[79747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:46 compute-0 sudo[79747]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:46 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 16:03:46 compute-0 condescending_merkle[79597]: 
Dec 09 16:03:46 compute-0 condescending_merkle[79597]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 09 16:03:46 compute-0 systemd[1]: libpod-c0b73449619ca4a15a2d7bc83f74ff3cd7ad12d217b5e69a15dd043a8af40b1d.scope: Deactivated successfully.
Dec 09 16:03:46 compute-0 podman[79534]: 2025-12-09 16:03:46.233274471 +0000 UTC m=+0.592298643 container died c0b73449619ca4a15a2d7bc83f74ff3cd7ad12d217b5e69a15dd043a8af40b1d (image=quay.io/ceph/ceph:v20, name=condescending_merkle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-98f87e1c7156001b1c1f0e1b39ee2d38185fe97f46d9ddf20845a1444abdb6b5-merged.mount: Deactivated successfully.
Dec 09 16:03:46 compute-0 podman[79534]: 2025-12-09 16:03:46.28469487 +0000 UTC m=+0.643719002 container remove c0b73449619ca4a15a2d7bc83f74ff3cd7ad12d217b5e69a15dd043a8af40b1d (image=quay.io/ceph/ceph:v20, name=condescending_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec 09 16:03:46 compute-0 systemd[1]: libpod-conmon-c0b73449619ca4a15a2d7bc83f74ff3cd7ad12d217b5e69a15dd043a8af40b1d.scope: Deactivated successfully.
Dec 09 16:03:46 compute-0 ansible-async_wrapper.py[79478]: Module complete (79478)
Dec 09 16:03:46 compute-0 sudo[79807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.client.admin.keyring.new
Dec 09 16:03:46 compute-0 sudo[79807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:46 compute-0 sudo[79807]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:46 compute-0 sudo[79832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.client.admin.keyring.new
Dec 09 16:03:46 compute-0 sudo[79832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:46 compute-0 sudo[79832]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:46 compute-0 sudo[79857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-67f67f44-54fc-54ea-8df0-10931b6ecdaf/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.client.admin.keyring.new /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.client.admin.keyring
Dec 09 16:03:46 compute-0 sudo[79857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:46 compute-0 sudo[79857]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:03:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:46 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:03:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:46 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev b5a37908-4c59-4d53-9eb4-1cb7d55d7bd3 (Updating crash deployment (+1 -> 1))
Dec 09 16:03:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 09 16:03:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 09 16:03:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 09 16:03:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:03:46 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:46 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 09 16:03:46 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 09 16:03:46 compute-0 sudo[79882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:46 compute-0 sudo[79882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:46 compute-0 sudo[79882]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:46 compute-0 sudo[79930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:46 compute-0 sudo[79930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:46 compute-0 sudo[79978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgrlknsbbylicocxrdoavslmdltxwdmm ; /usr/bin/python3'
Dec 09 16:03:46 compute-0 sudo[79978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:03:46 compute-0 python3[79980]: ansible-ansible.legacy.async_status Invoked with jid=j766969383853.79404 mode=status _async_dir=/root/.ansible_async
Dec 09 16:03:46 compute-0 sudo[79978]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:47 compute-0 podman[80045]: 2025-12-09 16:03:47.084000522 +0000 UTC m=+0.041709293 container create 98c8f8170514a586ae10641e2a86e96760f40d24db59881ee901f84fed316260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:03:47 compute-0 sudo[80082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhaquemdanysvzzbwuurgcaxsfjmrhyt ; /usr/bin/python3'
Dec 09 16:03:47 compute-0 sudo[80082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:03:47 compute-0 systemd[1]: Started libpod-conmon-98c8f8170514a586ae10641e2a86e96760f40d24db59881ee901f84fed316260.scope.
Dec 09 16:03:47 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:47 compute-0 podman[80045]: 2025-12-09 16:03:47.160686966 +0000 UTC m=+0.118395747 container init 98c8f8170514a586ae10641e2a86e96760f40d24db59881ee901f84fed316260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_pare, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 09 16:03:47 compute-0 podman[80045]: 2025-12-09 16:03:47.068403262 +0000 UTC m=+0.026112093 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:03:47 compute-0 podman[80045]: 2025-12-09 16:03:47.167972434 +0000 UTC m=+0.125681205 container start 98c8f8170514a586ae10641e2a86e96760f40d24db59881ee901f84fed316260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_pare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 09 16:03:47 compute-0 podman[80045]: 2025-12-09 16:03:47.171187749 +0000 UTC m=+0.128896550 container attach 98c8f8170514a586ae10641e2a86e96760f40d24db59881ee901f84fed316260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 16:03:47 compute-0 flamboyant_pare[80087]: 167 167
Dec 09 16:03:47 compute-0 systemd[1]: libpod-98c8f8170514a586ae10641e2a86e96760f40d24db59881ee901f84fed316260.scope: Deactivated successfully.
Dec 09 16:03:47 compute-0 podman[80045]: 2025-12-09 16:03:47.172987028 +0000 UTC m=+0.130695809 container died 98c8f8170514a586ae10641e2a86e96760f40d24db59881ee901f84fed316260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cd9332baf3573cc08935a7afb2a5af0853e60879cb50d7d7c6938be02edb66a-merged.mount: Deactivated successfully.
Dec 09 16:03:47 compute-0 podman[80045]: 2025-12-09 16:03:47.213101588 +0000 UTC m=+0.170810359 container remove 98c8f8170514a586ae10641e2a86e96760f40d24db59881ee901f84fed316260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:47 compute-0 systemd[1]: libpod-conmon-98c8f8170514a586ae10641e2a86e96760f40d24db59881ee901f84fed316260.scope: Deactivated successfully.
Dec 09 16:03:47 compute-0 python3[80084]: ansible-ansible.legacy.async_status Invoked with jid=j766969383853.79404 mode=cleanup _async_dir=/root/.ansible_async
Dec 09 16:03:47 compute-0 sudo[80082]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:47 compute-0 systemd[1]: Reloading.
Dec 09 16:03:47 compute-0 systemd-rc-local-generator[80132]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:47 compute-0 systemd-sysv-generator[80136]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:03:47 compute-0 ceph-mon[75222]: Updating compute-0:/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/config/ceph.client.admin.keyring
Dec 09 16:03:47 compute-0 ceph-mon[75222]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:47 compute-0 ceph-mon[75222]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 16:03:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 09 16:03:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 09 16:03:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:47 compute-0 ceph-mon[75222]: Deploying daemon crash.compute-0 on compute-0
Dec 09 16:03:47 compute-0 systemd[1]: Reloading.
Dec 09 16:03:47 compute-0 systemd-rc-local-generator[80198]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:47 compute-0 systemd-sysv-generator[80202]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:03:47 compute-0 sudo[80169]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swmifrkmpnhiduwizzohptfpdvovbcvw ; /usr/bin/python3'
Dec 09 16:03:47 compute-0 sudo[80169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:03:47 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf...
Dec 09 16:03:47 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:47 compute-0 python3[80207]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 16:03:48 compute-0 sudo[80169]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:48 compute-0 podman[80257]: 2025-12-09 16:03:48.155992259 +0000 UTC m=+0.048892957 container create 1d6c84974bebf4aec64bc5aa950bd94610adc6953e493e81f66035a7be677973 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-crash-compute-0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f06d8f96dc8caece02c3c26d657a19a110817749d0b00df484983688552a82b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f06d8f96dc8caece02c3c26d657a19a110817749d0b00df484983688552a82b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f06d8f96dc8caece02c3c26d657a19a110817749d0b00df484983688552a82b0/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f06d8f96dc8caece02c3c26d657a19a110817749d0b00df484983688552a82b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:48 compute-0 podman[80257]: 2025-12-09 16:03:48.216146024 +0000 UTC m=+0.109046752 container init 1d6c84974bebf4aec64bc5aa950bd94610adc6953e493e81f66035a7be677973 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-crash-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:03:48 compute-0 podman[80257]: 2025-12-09 16:03:48.220887518 +0000 UTC m=+0.113788216 container start 1d6c84974bebf4aec64bc5aa950bd94610adc6953e493e81f66035a7be677973 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:03:48 compute-0 bash[80257]: 1d6c84974bebf4aec64bc5aa950bd94610adc6953e493e81f66035a7be677973
Dec 09 16:03:48 compute-0 podman[80257]: 2025-12-09 16:03:48.133992201 +0000 UTC m=+0.026892919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:03:48 compute-0 systemd[1]: Started Ceph crash.compute-0 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:03:48 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-crash-compute-0[80272]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 09 16:03:48 compute-0 sudo[79930]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:03:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 09 16:03:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:48 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev b5a37908-4c59-4d53-9eb4-1cb7d55d7bd3 (Updating crash deployment (+1 -> 1))
Dec 09 16:03:48 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event b5a37908-4c59-4d53-9eb4-1cb7d55d7bd3 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec 09 16:03:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 09 16:03:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 09 16:03:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:48 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev 663827d1-61f8-43c0-bbe9-9beace5efdfb (Updating mgr deployment (+1 -> 2))
Dec 09 16:03:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.sjfqtt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 09 16:03:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.sjfqtt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 09 16:03:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.sjfqtt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 09 16:03:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 09 16:03:48 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mgr services"} : dispatch
Dec 09 16:03:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:03:48 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:48 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.sjfqtt on compute-0
Dec 09 16:03:48 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.sjfqtt on compute-0
Dec 09 16:03:48 compute-0 sudo[80302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxwwaumfsszuaaygwynukpgujjaszaff ; /usr/bin/python3'
Dec 09 16:03:48 compute-0 sudo[80302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:03:48 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-crash-compute-0[80272]: 2025-12-09T16:03:48.378+0000 7f8428fd7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 09 16:03:48 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-crash-compute-0[80272]: 2025-12-09T16:03:48.378+0000 7f8428fd7640 -1 AuthRegistry(0x7f8424052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 09 16:03:48 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-crash-compute-0[80272]: 2025-12-09T16:03:48.380+0000 7f8428fd7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 09 16:03:48 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-crash-compute-0[80272]: 2025-12-09T16:03:48.380+0000 7f8428fd7640 -1 AuthRegistry(0x7f8428fd5fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 09 16:03:48 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-crash-compute-0[80272]: 2025-12-09T16:03:48.381+0000 7f8422575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 09 16:03:48 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-crash-compute-0[80272]: 2025-12-09T16:03:48.381+0000 7f8428fd7640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 09 16:03:48 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-crash-compute-0[80272]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 09 16:03:48 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-crash-compute-0[80272]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 09 16:03:48 compute-0 sudo[80303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:48 compute-0 sudo[80303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:48 compute-0 sudo[80303]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:48 compute-0 sudo[80340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:48 compute-0 sudo[80340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:48 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:48 compute-0 python3[80316]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:03:48 compute-0 podman[80365]: 2025-12-09 16:03:48.583674226 +0000 UTC m=+0.057783598 container create a5325192d50fd998727b6c9fa00dfe75a9ffd25ef48b26d08638a38ffc9b7dc4 (image=quay.io/ceph/ceph:v20, name=sharp_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:48 compute-0 systemd[1]: Started libpod-conmon-a5325192d50fd998727b6c9fa00dfe75a9ffd25ef48b26d08638a38ffc9b7dc4.scope.
Dec 09 16:03:48 compute-0 podman[80365]: 2025-12-09 16:03:48.565308166 +0000 UTC m=+0.039417548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:48 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f1d141860a30375329c1556f63b148d546f787046e2fdc91059793f1139e62/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f1d141860a30375329c1556f63b148d546f787046e2fdc91059793f1139e62/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f1d141860a30375329c1556f63b148d546f787046e2fdc91059793f1139e62/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:48 compute-0 podman[80365]: 2025-12-09 16:03:48.691864029 +0000 UTC m=+0.165973391 container init a5325192d50fd998727b6c9fa00dfe75a9ffd25ef48b26d08638a38ffc9b7dc4 (image=quay.io/ceph/ceph:v20, name=sharp_bartik, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:48 compute-0 podman[80365]: 2025-12-09 16:03:48.703288092 +0000 UTC m=+0.177397454 container start a5325192d50fd998727b6c9fa00dfe75a9ffd25ef48b26d08638a38ffc9b7dc4 (image=quay.io/ceph/ceph:v20, name=sharp_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:03:48 compute-0 podman[80365]: 2025-12-09 16:03:48.70691341 +0000 UTC m=+0.181022852 container attach a5325192d50fd998727b6c9fa00dfe75a9ffd25ef48b26d08638a38ffc9b7dc4 (image=quay.io/ceph/ceph:v20, name=sharp_bartik, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 16:03:48 compute-0 podman[80442]: 2025-12-09 16:03:48.960165241 +0000 UTC m=+0.056029551 container create 1aff4a999b3dbb6a029d9df062d6b49b2cbe957c391f8d03b72296f407b00810 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:03:49 compute-0 systemd[1]: Started libpod-conmon-1aff4a999b3dbb6a029d9df062d6b49b2cbe957c391f8d03b72296f407b00810.scope.
Dec 09 16:03:49 compute-0 podman[80442]: 2025-12-09 16:03:48.930913845 +0000 UTC m=+0.026778175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:03:49 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:49 compute-0 podman[80442]: 2025-12-09 16:03:49.048563568 +0000 UTC m=+0.144427898 container init 1aff4a999b3dbb6a029d9df062d6b49b2cbe957c391f8d03b72296f407b00810 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 09 16:03:49 compute-0 podman[80442]: 2025-12-09 16:03:49.058461861 +0000 UTC m=+0.154326171 container start 1aff4a999b3dbb6a029d9df062d6b49b2cbe957c391f8d03b72296f407b00810 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_roentgen, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:03:49 compute-0 podman[80442]: 2025-12-09 16:03:49.06240851 +0000 UTC m=+0.158272790 container attach 1aff4a999b3dbb6a029d9df062d6b49b2cbe957c391f8d03b72296f407b00810 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_roentgen, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:49 compute-0 amazing_roentgen[80458]: 167 167
Dec 09 16:03:49 compute-0 systemd[1]: libpod-1aff4a999b3dbb6a029d9df062d6b49b2cbe957c391f8d03b72296f407b00810.scope: Deactivated successfully.
Dec 09 16:03:49 compute-0 podman[80442]: 2025-12-09 16:03:49.064586571 +0000 UTC m=+0.160450881 container died 1aff4a999b3dbb6a029d9df062d6b49b2cbe957c391f8d03b72296f407b00810 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aded67e8bc9f15835915ceeb9e68899d05a6b4dcff2c50bb79683ea226802b8-merged.mount: Deactivated successfully.
Dec 09 16:03:49 compute-0 podman[80442]: 2025-12-09 16:03:49.111221054 +0000 UTC m=+0.207085354 container remove 1aff4a999b3dbb6a029d9df062d6b49b2cbe957c391f8d03b72296f407b00810 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_roentgen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 09 16:03:49 compute-0 systemd[1]: libpod-conmon-1aff4a999b3dbb6a029d9df062d6b49b2cbe957c391f8d03b72296f407b00810.scope: Deactivated successfully.
Dec 09 16:03:49 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14168 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 16:03:49 compute-0 sharp_bartik[80380]: 
Dec 09 16:03:49 compute-0 sharp_bartik[80380]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 09 16:03:49 compute-0 systemd[1]: libpod-a5325192d50fd998727b6c9fa00dfe75a9ffd25ef48b26d08638a38ffc9b7dc4.scope: Deactivated successfully.
Dec 09 16:03:49 compute-0 podman[80365]: 2025-12-09 16:03:49.146794855 +0000 UTC m=+0.620904227 container died a5325192d50fd998727b6c9fa00dfe75a9ffd25ef48b26d08638a38ffc9b7dc4 (image=quay.io/ceph/ceph:v20, name=sharp_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:49 compute-0 systemd[1]: Reloading.
Dec 09 16:03:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:03:49 compute-0 systemd-rc-local-generator[80517]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:49 compute-0 systemd-sysv-generator[80521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:03:49 compute-0 ceph-mon[75222]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.sjfqtt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 09 16:03:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.sjfqtt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 09 16:03:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mgr services"} : dispatch
Dec 09 16:03:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:49 compute-0 ceph-mon[75222]: Deploying daemon mgr.compute-0.sjfqtt on compute-0
Dec 09 16:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-80f1d141860a30375329c1556f63b148d546f787046e2fdc91059793f1139e62-merged.mount: Deactivated successfully.
Dec 09 16:03:49 compute-0 podman[80365]: 2025-12-09 16:03:49.467662254 +0000 UTC m=+0.941771616 container remove a5325192d50fd998727b6c9fa00dfe75a9ffd25ef48b26d08638a38ffc9b7dc4 (image=quay.io/ceph/ceph:v20, name=sharp_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 16:03:49 compute-0 systemd[1]: libpod-conmon-a5325192d50fd998727b6c9fa00dfe75a9ffd25ef48b26d08638a38ffc9b7dc4.scope: Deactivated successfully.
Dec 09 16:03:49 compute-0 sudo[80302]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:49 compute-0 systemd[1]: Reloading.
Dec 09 16:03:49 compute-0 systemd-sysv-generator[80558]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:03:49 compute-0 systemd-rc-local-generator[80554]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:49 compute-0 systemd[1]: Starting Ceph mgr.compute-0.sjfqtt for 67f67f44-54fc-54ea-8df0-10931b6ecdaf...
Dec 09 16:03:49 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:49 compute-0 sudo[80596]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npwozmqwolgkvnqlfrbvicenzzobxynf ; /usr/bin/python3'
Dec 09 16:03:49 compute-0 sudo[80596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:03:50 compute-0 python3[80603]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:03:50 compute-0 podman[80641]: 2025-12-09 16:03:50.130061086 +0000 UTC m=+0.051352098 container create b3b17105391d4a64fbd08f9c7bbd1c6ae25622203819d0918689b0090c91d47c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-sjfqtt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d49ab8966897121293d2e72f8b0befb75cb9d4f6511633b5c72c3caf7825cd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d49ab8966897121293d2e72f8b0befb75cb9d4f6511633b5c72c3caf7825cd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d49ab8966897121293d2e72f8b0befb75cb9d4f6511633b5c72c3caf7825cd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d49ab8966897121293d2e72f8b0befb75cb9d4f6511633b5c72c3caf7825cd7/merged/var/lib/ceph/mgr/ceph-compute-0.sjfqtt supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:50 compute-0 podman[80647]: 2025-12-09 16:03:50.180383489 +0000 UTC m=+0.073386438 container create fa205b27d9ee2da1103c73c2a67b0f7cf0ea11926e60fe9adf5c9648478d3175 (image=quay.io/ceph/ceph:v20, name=determined_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 09 16:03:50 compute-0 podman[80641]: 2025-12-09 16:03:50.18808456 +0000 UTC m=+0.109375572 container init b3b17105391d4a64fbd08f9c7bbd1c6ae25622203819d0918689b0090c91d47c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-sjfqtt, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 16:03:50 compute-0 podman[80641]: 2025-12-09 16:03:50.199041858 +0000 UTC m=+0.120332870 container start b3b17105391d4a64fbd08f9c7bbd1c6ae25622203819d0918689b0090c91d47c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-sjfqtt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:03:50 compute-0 podman[80641]: 2025-12-09 16:03:50.105345148 +0000 UTC m=+0.026636210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:03:50 compute-0 bash[80641]: b3b17105391d4a64fbd08f9c7bbd1c6ae25622203819d0918689b0090c91d47c
Dec 09 16:03:50 compute-0 systemd[1]: Started Ceph mgr.compute-0.sjfqtt for 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:03:50 compute-0 systemd[1]: Started libpod-conmon-fa205b27d9ee2da1103c73c2a67b0f7cf0ea11926e60fe9adf5c9648478d3175.scope.
Dec 09 16:03:50 compute-0 ceph-mgr[80673]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 16:03:50 compute-0 ceph-mgr[80673]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 09 16:03:50 compute-0 ceph-mgr[80673]: pidfile_write: ignore empty --pid-file
Dec 09 16:03:50 compute-0 podman[80647]: 2025-12-09 16:03:50.155967662 +0000 UTC m=+0.048970651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:50 compute-0 sudo[80340]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:03:50 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:50 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:50 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e57f128f03431b834fa317d44a211c046e5cd363266845e242f2acdc694233/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e57f128f03431b834fa317d44a211c046e5cd363266845e242f2acdc694233/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e57f128f03431b834fa317d44a211c046e5cd363266845e242f2acdc694233/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:50 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'alerts'
Dec 09 16:03:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 09 16:03:50 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:50 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev 663827d1-61f8-43c0-bbe9-9beace5efdfb (Updating mgr deployment (+1 -> 2))
Dec 09 16:03:50 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event 663827d1-61f8-43c0-bbe9-9beace5efdfb (Updating mgr deployment (+1 -> 2)) in 2 seconds
Dec 09 16:03:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 09 16:03:50 compute-0 ceph-mon[75222]: from='client.14168 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 16:03:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:50 compute-0 podman[80647]: 2025-12-09 16:03:50.311118867 +0000 UTC m=+0.204121846 container init fa205b27d9ee2da1103c73c2a67b0f7cf0ea11926e60fe9adf5c9648478d3175 (image=quay.io/ceph/ceph:v20, name=determined_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:03:50 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:50 compute-0 podman[80647]: 2025-12-09 16:03:50.319299334 +0000 UTC m=+0.212302283 container start fa205b27d9ee2da1103c73c2a67b0f7cf0ea11926e60fe9adf5c9648478d3175 (image=quay.io/ceph/ceph:v20, name=determined_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 09 16:03:50 compute-0 podman[80647]: 2025-12-09 16:03:50.322953964 +0000 UTC m=+0.215956923 container attach fa205b27d9ee2da1103c73c2a67b0f7cf0ea11926e60fe9adf5c9648478d3175 (image=quay.io/ceph/ceph:v20, name=determined_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:50 compute-0 sudo[80700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:03:50 compute-0 sudo[80700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:50 compute-0 sudo[80700]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:50 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'balancer'
Dec 09 16:03:50 compute-0 sudo[80725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:50 compute-0 sudo[80725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:50 compute-0 sudo[80725]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:50 compute-0 ansible-async_wrapper.py[79475]: Done in kid B.
Dec 09 16:03:50 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'cephadm'
Dec 09 16:03:50 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:50 compute-0 sudo[80769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 09 16:03:50 compute-0 sudo[80769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec 09 16:03:50 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1834522179' entity='client.admin' 
Dec 09 16:03:50 compute-0 systemd[1]: libpod-fa205b27d9ee2da1103c73c2a67b0f7cf0ea11926e60fe9adf5c9648478d3175.scope: Deactivated successfully.
Dec 09 16:03:50 compute-0 podman[80647]: 2025-12-09 16:03:50.750425163 +0000 UTC m=+0.643428162 container died fa205b27d9ee2da1103c73c2a67b0f7cf0ea11926e60fe9adf5c9648478d3175 (image=quay.io/ceph/ceph:v20, name=determined_solomon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3e57f128f03431b834fa317d44a211c046e5cd363266845e242f2acdc694233-merged.mount: Deactivated successfully.
Dec 09 16:03:50 compute-0 podman[80647]: 2025-12-09 16:03:50.805188002 +0000 UTC m=+0.698190991 container remove fa205b27d9ee2da1103c73c2a67b0f7cf0ea11926e60fe9adf5c9648478d3175 (image=quay.io/ceph/ceph:v20, name=determined_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:03:50 compute-0 systemd[1]: libpod-conmon-fa205b27d9ee2da1103c73c2a67b0f7cf0ea11926e60fe9adf5c9648478d3175.scope: Deactivated successfully.
Dec 09 16:03:50 compute-0 sudo[80596]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:50 compute-0 podman[80851]: 2025-12-09 16:03:50.947112447 +0000 UTC m=+0.055768662 container exec 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 09 16:03:51 compute-0 sudo[80905]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvpmwqhanwpzcakyqmlovzgifirdcwos ; /usr/bin/python3'
Dec 09 16:03:51 compute-0 sudo[80905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:03:51 compute-0 podman[80851]: 2025-12-09 16:03:51.037808749 +0000 UTC m=+0.146464894 container exec_died 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:51 compute-0 python3[80907]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:03:51 compute-0 podman[80951]: 2025-12-09 16:03:51.246477143 +0000 UTC m=+0.053152687 container create e00e5e046c5b41deb8265c0845758acd4d3464e2e8a29f6c6d0a9c5542d6c7f4 (image=quay.io/ceph/ceph:v20, name=intelligent_sutherland, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:51 compute-0 systemd[1]: Started libpod-conmon-e00e5e046c5b41deb8265c0845758acd4d3464e2e8a29f6c6d0a9c5542d6c7f4.scope.
Dec 09 16:03:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10873e6a9abd15fefb029ee2383d7db06211599fbb8458dc20802b75dc5ba9e5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10873e6a9abd15fefb029ee2383d7db06211599fbb8458dc20802b75dc5ba9e5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10873e6a9abd15fefb029ee2383d7db06211599fbb8458dc20802b75dc5ba9e5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:51 compute-0 ceph-mon[75222]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:51 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:51 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:51 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1834522179' entity='client.admin' 
Dec 09 16:03:51 compute-0 podman[80951]: 2025-12-09 16:03:51.225475647 +0000 UTC m=+0.032151221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:51 compute-0 podman[80951]: 2025-12-09 16:03:51.328313535 +0000 UTC m=+0.134989089 container init e00e5e046c5b41deb8265c0845758acd4d3464e2e8a29f6c6d0a9c5542d6c7f4 (image=quay.io/ceph/ceph:v20, name=intelligent_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:51 compute-0 podman[80951]: 2025-12-09 16:03:51.335766179 +0000 UTC m=+0.142441723 container start e00e5e046c5b41deb8265c0845758acd4d3464e2e8a29f6c6d0a9c5542d6c7f4 (image=quay.io/ceph/ceph:v20, name=intelligent_sutherland, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:03:51 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'crash'
Dec 09 16:03:51 compute-0 podman[80951]: 2025-12-09 16:03:51.338733866 +0000 UTC m=+0.145409440 container attach e00e5e046c5b41deb8265c0845758acd4d3464e2e8a29f6c6d0a9c5542d6c7f4 (image=quay.io/ceph/ceph:v20, name=intelligent_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:03:51 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'dashboard'
Dec 09 16:03:51 compute-0 sudo[80769]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:03:51 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:51 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:03:51 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:03:51 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:03:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:03:51 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:51 compute-0 ceph-mgr[75515]: [progress INFO root] Writing back 2 completed events
Dec 09 16:03:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 16:03:51 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:51 compute-0 sudo[81033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:03:51 compute-0 sudo[81033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:51 compute-0 sudo[81033]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:51 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 09 16:03:51 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 09 16:03:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 09 16:03:51 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 09 16:03:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 09 16:03:51 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 09 16:03:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:03:51 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:51 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 09 16:03:51 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 09 16:03:51 compute-0 sudo[81058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:51 compute-0 sudo[81058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:51 compute-0 sudo[81058]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:51 compute-0 sudo[81083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:51 compute-0 sudo[81083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec 09 16:03:51 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/85454318' entity='client.admin' 
Dec 09 16:03:51 compute-0 systemd[1]: libpod-e00e5e046c5b41deb8265c0845758acd4d3464e2e8a29f6c6d0a9c5542d6c7f4.scope: Deactivated successfully.
Dec 09 16:03:51 compute-0 conmon[80980]: conmon e00e5e046c5b41deb826 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e00e5e046c5b41deb8265c0845758acd4d3464e2e8a29f6c6d0a9c5542d6c7f4.scope/container/memory.events
Dec 09 16:03:51 compute-0 podman[80951]: 2025-12-09 16:03:51.809237511 +0000 UTC m=+0.615913055 container died e00e5e046c5b41deb8265c0845758acd4d3464e2e8a29f6c6d0a9c5542d6c7f4 (image=quay.io/ceph/ceph:v20, name=intelligent_sutherland, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-10873e6a9abd15fefb029ee2383d7db06211599fbb8458dc20802b75dc5ba9e5-merged.mount: Deactivated successfully.
Dec 09 16:03:51 compute-0 podman[80951]: 2025-12-09 16:03:51.849657811 +0000 UTC m=+0.656333355 container remove e00e5e046c5b41deb8265c0845758acd4d3464e2e8a29f6c6d0a9c5542d6c7f4 (image=quay.io/ceph/ceph:v20, name=intelligent_sutherland, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 09 16:03:51 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:51 compute-0 systemd[1]: libpod-conmon-e00e5e046c5b41deb8265c0845758acd4d3464e2e8a29f6c6d0a9c5542d6c7f4.scope: Deactivated successfully.
Dec 09 16:03:51 compute-0 sudo[80905]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:51 compute-0 podman[81135]: 2025-12-09 16:03:51.969584357 +0000 UTC m=+0.040482883 container create 82c1c496b5324d84131bb5a30e6a4fc03cd002a3a752f767afb0295dcab3b606 (image=quay.io/ceph/ceph:v20, name=xenodochial_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:52 compute-0 systemd[1]: Started libpod-conmon-82c1c496b5324d84131bb5a30e6a4fc03cd002a3a752f767afb0295dcab3b606.scope.
Dec 09 16:03:52 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:52 compute-0 podman[81135]: 2025-12-09 16:03:51.948708655 +0000 UTC m=+0.019607171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:52 compute-0 podman[81135]: 2025-12-09 16:03:52.063021928 +0000 UTC m=+0.133920514 container init 82c1c496b5324d84131bb5a30e6a4fc03cd002a3a752f767afb0295dcab3b606 (image=quay.io/ceph/ceph:v20, name=xenodochial_neumann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:52 compute-0 podman[81135]: 2025-12-09 16:03:52.071379031 +0000 UTC m=+0.142277537 container start 82c1c496b5324d84131bb5a30e6a4fc03cd002a3a752f767afb0295dcab3b606 (image=quay.io/ceph/ceph:v20, name=xenodochial_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Dec 09 16:03:52 compute-0 podman[81135]: 2025-12-09 16:03:52.075798386 +0000 UTC m=+0.146696922 container attach 82c1c496b5324d84131bb5a30e6a4fc03cd002a3a752f767afb0295dcab3b606 (image=quay.io/ceph/ceph:v20, name=xenodochial_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:52 compute-0 xenodochial_neumann[81152]: 167 167
Dec 09 16:03:52 compute-0 systemd[1]: libpod-82c1c496b5324d84131bb5a30e6a4fc03cd002a3a752f767afb0295dcab3b606.scope: Deactivated successfully.
Dec 09 16:03:52 compute-0 podman[81135]: 2025-12-09 16:03:52.078225165 +0000 UTC m=+0.149123711 container died 82c1c496b5324d84131bb5a30e6a4fc03cd002a3a752f767afb0295dcab3b606 (image=quay.io/ceph/ceph:v20, name=xenodochial_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:52 compute-0 sudo[81179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqulymxxlfxeedsfuksdggryyeaatclf ; /usr/bin/python3'
Dec 09 16:03:52 compute-0 sudo[81179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8d192eb60b5cf0e1a944e8c0576240d0187b2d14e665f8214d0a7ae6801148b-merged.mount: Deactivated successfully.
Dec 09 16:03:52 compute-0 podman[81135]: 2025-12-09 16:03:52.138833854 +0000 UTC m=+0.209732370 container remove 82c1c496b5324d84131bb5a30e6a4fc03cd002a3a752f767afb0295dcab3b606 (image=quay.io/ceph/ceph:v20, name=xenodochial_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:52 compute-0 systemd[1]: libpod-conmon-82c1c496b5324d84131bb5a30e6a4fc03cd002a3a752f767afb0295dcab3b606.scope: Deactivated successfully.
Dec 09 16:03:52 compute-0 sudo[81083]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:52 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:03:52 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'devicehealth'
Dec 09 16:03:52 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:52 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:52 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:52 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ysegzv (unknown last config time)...
Dec 09 16:03:52 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ysegzv (unknown last config time)...
Dec 09 16:03:52 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ysegzv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 09 16:03:52 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ysegzv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 09 16:03:52 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 09 16:03:52 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mgr services"} : dispatch
Dec 09 16:03:52 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:03:52 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:52 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ysegzv on compute-0
Dec 09 16:03:52 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ysegzv on compute-0
Dec 09 16:03:52 compute-0 python3[81188]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:03:52 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'diskprediction_local'
Dec 09 16:03:52 compute-0 sudo[81196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:52 compute-0 sudo[81196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:52 compute-0 sudo[81196]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:52 compute-0 podman[81220]: 2025-12-09 16:03:52.33191357 +0000 UTC m=+0.051037428 container create 3f90221aed6698a3150faec2baff54fa42f05058262f1bae6e15d18964713fa2 (image=quay.io/ceph/ceph:v20, name=fervent_noyce, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:52 compute-0 sudo[81222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:03:52 compute-0 sudo[81222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:52 compute-0 systemd[1]: Started libpod-conmon-3f90221aed6698a3150faec2baff54fa42f05058262f1bae6e15d18964713fa2.scope.
Dec 09 16:03:52 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:52 compute-0 podman[81220]: 2025-12-09 16:03:52.306610053 +0000 UTC m=+0.025733931 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024d0ddbeae5f6610d2e5449b5c4b59a605020c3ea7009d4d8731706209f798d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024d0ddbeae5f6610d2e5449b5c4b59a605020c3ea7009d4d8731706209f798d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024d0ddbeae5f6610d2e5449b5c4b59a605020c3ea7009d4d8731706209f798d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:52 compute-0 podman[81220]: 2025-12-09 16:03:52.415591802 +0000 UTC m=+0.134715670 container init 3f90221aed6698a3150faec2baff54fa42f05058262f1bae6e15d18964713fa2 (image=quay.io/ceph/ceph:v20, name=fervent_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 09 16:03:52 compute-0 podman[81220]: 2025-12-09 16:03:52.422176287 +0000 UTC m=+0.141300135 container start 3f90221aed6698a3150faec2baff54fa42f05058262f1bae6e15d18964713fa2 (image=quay.io/ceph/ceph:v20, name=fervent_noyce, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:52 compute-0 podman[81220]: 2025-12-09 16:03:52.425887658 +0000 UTC m=+0.145011506 container attach 3f90221aed6698a3150faec2baff54fa42f05058262f1bae6e15d18964713fa2 (image=quay.io/ceph/ceph:v20, name=fervent_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
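The five podman events above (create, image pull, init, start, attach) are one complete run of a short-lived cephadm helper container, driven by the call recorded in the sudo line at 16:03:52. Reflowed for readability, with the fsid and the digest-suffixed cephadm filename copied verbatim from that line (`_orch deploy` is cephadm's internal entry point invoked by the mgr, not a command operators normally run by hand):

```bash
sudo /bin/python3 \
  /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b \
  --image quay.io/ceph/ceph:v20 \
  --timeout 895 \
  _orch deploy --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
```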
Dec 09 16:03:52 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-sjfqtt[80668]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 09 16:03:52 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-sjfqtt[80668]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 09 16:03:52 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-sjfqtt[80668]:   from numpy import show_config as show_numpy_config
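The three mgr lines above are SciPy's stock warning about NumPy being imported inside a Python sub-interpreter (ceph-mgr embeds one interpreter per module); it is informational and module loading continues. Were it noisy enough to suppress, Python's standard warning filter would be the usual knob; a sketch only, assuming one can inject environment into the mgr container:

```bash
# Sketch: standard CPython warning-filter syntax (action::category).
# This silences every UserWarning, not just the SciPy one, so use narrowly.
export PYTHONWARNINGS="ignore::UserWarning"
```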
Dec 09 16:03:52 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'influx'
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/85454318' entity='client.admin' 
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ysegzv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mgr services"} : dispatch
Dec 09 16:03:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
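Each `cmd={...}` JSON prefix in the audit lines above maps one-to-one onto a ceph CLI command, so the sequence the mgr issues here can be reproduced from any admin shell:

```bash
# CLI equivalents of the audited mon commands (same JSON prefixes):
ceph config generate-minimal-conf     # {"prefix": "config generate-minimal-conf"}
ceph auth get client.admin            # {"prefix": "auth get", "entity": "client.admin"}
ceph auth get mon.                    # {"prefix": "auth get", "entity": "mon."}
ceph config get mon public_network    # {"prefix": "config get", "who": "mon", "key": "public_network"}
```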
Dec 09 16:03:52 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:52 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'insights'
Dec 09 16:03:52 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'iostat'
Dec 09 16:03:52 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'k8sevents'
Dec 09 16:03:52 compute-0 podman[81300]: 2025-12-09 16:03:52.668084928 +0000 UTC m=+0.048511555 container create 7b6212816b0b6a4abbe853437542f1024e32d5d595097ccb6aa0d1a90a470e70 (image=quay.io/ceph/ceph:v20, name=quirky_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:52 compute-0 systemd[1]: Started libpod-conmon-7b6212816b0b6a4abbe853437542f1024e32d5d595097ccb6aa0d1a90a470e70.scope.
Dec 09 16:03:52 compute-0 podman[81300]: 2025-12-09 16:03:52.646787362 +0000 UTC m=+0.027214029 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:52 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:52 compute-0 podman[81300]: 2025-12-09 16:03:52.75940829 +0000 UTC m=+0.139834967 container init 7b6212816b0b6a4abbe853437542f1024e32d5d595097ccb6aa0d1a90a470e70 (image=quay.io/ceph/ceph:v20, name=quirky_elion, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 16:03:52 compute-0 podman[81300]: 2025-12-09 16:03:52.76919569 +0000 UTC m=+0.149622357 container start 7b6212816b0b6a4abbe853437542f1024e32d5d595097ccb6aa0d1a90a470e70 (image=quay.io/ceph/ceph:v20, name=quirky_elion, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:52 compute-0 podman[81300]: 2025-12-09 16:03:52.773353265 +0000 UTC m=+0.153779902 container attach 7b6212816b0b6a4abbe853437542f1024e32d5d595097ccb6aa0d1a90a470e70 (image=quay.io/ceph/ceph:v20, name=quirky_elion, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 09 16:03:52 compute-0 quirky_elion[81316]: 167 167
Dec 09 16:03:52 compute-0 systemd[1]: libpod-7b6212816b0b6a4abbe853437542f1024e32d5d595097ccb6aa0d1a90a470e70.scope: Deactivated successfully.
Dec 09 16:03:52 compute-0 podman[81300]: 2025-12-09 16:03:52.775159084 +0000 UTC m=+0.155585701 container died 7b6212816b0b6a4abbe853437542f1024e32d5d595097ccb6aa0d1a90a470e70 (image=quay.io/ceph/ceph:v20, name=quirky_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:03:52 compute-0 podman[81300]: 2025-12-09 16:03:52.810789858 +0000 UTC m=+0.191216525 container remove 7b6212816b0b6a4abbe853437542f1024e32d5d595097ccb6aa0d1a90a470e70 (image=quay.io/ceph/ceph:v20, name=quirky_elion, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 16:03:52 compute-0 systemd[1]: libpod-conmon-7b6212816b0b6a4abbe853437542f1024e32d5d595097ccb6aa0d1a90a470e70.scope: Deactivated successfully.
Dec 09 16:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a435d7e52a063c2af8bc7e0c906af02a74bfc6a637ac6297b2e7ad75508eb271-merged.mount: Deactivated successfully.
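The bare `167 167` printed by the throwaway quirky_elion container (created and removed within the same second above) is, in all likelihood, cephadm's uid/gid probe: it stats a ceph-owned path inside the image to learn which ids daemon data should be chowned to, and 167:167 is the `ceph` user and group in upstream images. A hand reproduction, with the probe path being an assumption (it varies by cephadm version):

```bash
# Assumed probe path; cephadm stats a ceph-owned directory inside the image.
podman run --rm quay.io/ceph/ceph:v20 stat -c '%u %g' /var/lib/ceph
# expected output: 167 167
```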
Dec 09 16:03:52 compute-0 sudo[81222]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:52 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:03:52 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:52 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec 09 16:03:52 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2441551815' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 09 16:03:52 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:52 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:52 compute-0 sudo[81334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:52 compute-0 sudo[81334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:52 compute-0 sudo[81334]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:53 compute-0 sudo[81359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 09 16:03:53 compute-0 sudo[81359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:53 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'localpool'
Dec 09 16:03:53 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'mds_autoscaler'
Dec 09 16:03:53 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'mirroring'
Dec 09 16:03:53 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'nfs'
Dec 09 16:03:53 compute-0 ceph-mon[75222]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 09 16:03:53 compute-0 ceph-mon[75222]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 09 16:03:53 compute-0 ceph-mon[75222]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:53 compute-0 ceph-mon[75222]: Reconfiguring mgr.compute-0.ysegzv (unknown last config time)...
Dec 09 16:03:53 compute-0 ceph-mon[75222]: Reconfiguring daemon mgr.compute-0.ysegzv on compute-0
Dec 09 16:03:53 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:53 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2441551815' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 09 16:03:53 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:53 compute-0 podman[81428]: 2025-12-09 16:03:53.483175026 +0000 UTC m=+0.079004941 container exec 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:03:53 compute-0 podman[81428]: 2025-12-09 16:03:53.567445848 +0000 UTC m=+0.163275773 container exec_died 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:53 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'orchestrator'
Dec 09 16:03:53 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 09 16:03:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:03:53 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2441551815' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 09 16:03:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 09 16:03:53 compute-0 fervent_noyce[81261]: set require_min_compat_client to mimic
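The helper's single line of output confirms the mon command audited at 16:03:52-53. The operator-facing form of the same call:

```bash
# Sets the floor for client feature bits cluster-wide; clients older than
# mimic will be refused once this takes effect.
ceph osd set-require-min-compat-client mimic
```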
Dec 09 16:03:53 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 09 16:03:53 compute-0 podman[81220]: 2025-12-09 16:03:53.899745149 +0000 UTC m=+1.618869007 container died 3f90221aed6698a3150faec2baff54fa42f05058262f1bae6e15d18964713fa2 (image=quay.io/ceph/ceph:v20, name=fervent_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec 09 16:03:53 compute-0 systemd[1]: libpod-3f90221aed6698a3150faec2baff54fa42f05058262f1bae6e15d18964713fa2.scope: Deactivated successfully.
Dec 09 16:03:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-024d0ddbeae5f6610d2e5449b5c4b59a605020c3ea7009d4d8731706209f798d-merged.mount: Deactivated successfully.
Dec 09 16:03:53 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'osd_perf_query'
Dec 09 16:03:53 compute-0 podman[81220]: 2025-12-09 16:03:53.945354468 +0000 UTC m=+1.664478326 container remove 3f90221aed6698a3150faec2baff54fa42f05058262f1bae6e15d18964713fa2 (image=quay.io/ceph/ceph:v20, name=fervent_noyce, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:53 compute-0 systemd[1]: libpod-conmon-3f90221aed6698a3150faec2baff54fa42f05058262f1bae6e15d18964713fa2.scope: Deactivated successfully.
Dec 09 16:03:53 compute-0 sudo[81179]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:54 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'osd_support'
Dec 09 16:03:54 compute-0 sudo[81359]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:03:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:03:54 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:03:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:03:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:03:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:54 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'pg_autoscaler'
Dec 09 16:03:54 compute-0 sudo[81551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:03:54 compute-0 sudo[81551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:54 compute-0 sudo[81551]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:54 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'progress'
Dec 09 16:03:54 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'prometheus'
Dec 09 16:03:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:03:54 compute-0 sudo[81599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klruwmccwpdhrocaejgkosocdcezwtqp ; /usr/bin/python3'
Dec 09 16:03:54 compute-0 sudo[81599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:03:54 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:54 compute-0 python3[81601]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
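The task's `_raw_params` above, reflowed for readability (every flag and path is copied from the log line):

```bash
podman run --rm --net=host --ipc=host \
  --volume /etc/ceph:/etc/ceph:z \
  --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
  --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
  --entrypoint ceph quay.io/ceph/ceph:v20 \
  --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf \
  -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
  orch apply --in-file /home/ceph_spec.yaml
```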
Dec 09 16:03:54 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'rbd_support'
Dec 09 16:03:54 compute-0 podman[81602]: 2025-12-09 16:03:54.65707654 +0000 UTC m=+0.051851364 container create 035c31db7f62159a138b9947f61c3dc55c7647cab9aa545f5aba33915294bc3f (image=quay.io/ceph/ceph:v20, name=nifty_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 16:03:54 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'rgw'
Dec 09 16:03:54 compute-0 systemd[1]: Started libpod-conmon-035c31db7f62159a138b9947f61c3dc55c7647cab9aa545f5aba33915294bc3f.scope.
Dec 09 16:03:54 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9d7ea97be03d465309256c0feec95d54fc19f537715cf0b13be7c1ec840349/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9d7ea97be03d465309256c0feec95d54fc19f537715cf0b13be7c1ec840349/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9d7ea97be03d465309256c0feec95d54fc19f537715cf0b13be7c1ec840349/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:54 compute-0 podman[81602]: 2025-12-09 16:03:54.637709488 +0000 UTC m=+0.032484342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:54 compute-0 podman[81602]: 2025-12-09 16:03:54.753274942 +0000 UTC m=+0.148049786 container init 035c31db7f62159a138b9947f61c3dc55c7647cab9aa545f5aba33915294bc3f (image=quay.io/ceph/ceph:v20, name=nifty_tharp, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 16:03:54 compute-0 podman[81602]: 2025-12-09 16:03:54.760227599 +0000 UTC m=+0.155002413 container start 035c31db7f62159a138b9947f61c3dc55c7647cab9aa545f5aba33915294bc3f (image=quay.io/ceph/ceph:v20, name=nifty_tharp, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:54 compute-0 podman[81602]: 2025-12-09 16:03:54.763284418 +0000 UTC m=+0.158059242 container attach 035c31db7f62159a138b9947f61c3dc55c7647cab9aa545f5aba33915294bc3f (image=quay.io/ceph/ceph:v20, name=nifty_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:54 compute-0 ceph-mon[75222]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:54 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2441551815' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 09 16:03:54 compute-0 ceph-mon[75222]: osdmap e3: 0 total, 0 up, 0 in
Dec 09 16:03:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:03:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:54 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'rook'
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:55 compute-0 sudo[81641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:55 compute-0 sudo[81641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:55 compute-0 sudo[81641]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:55 compute-0 sudo[81666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 09 16:03:55 compute-0 sudo[81666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:55 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'selftest'
Dec 09 16:03:55 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'smb'
Dec 09 16:03:55 compute-0 sudo[81666]: pam_unix(sudo:session): session closed for user root
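Before scheduling daemons onto the host, cephadm re-validates it; the check-host call recorded in the sudo line at 16:03:55, reflowed:

```bash
sudo /bin/python3 \
  /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b \
  --timeout 895 check-host --expect-hostname compute-0
```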
Dec 09 16:03:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 16:03:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 16:03:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 16:03:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 16:03:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: [cephadm INFO root] Added host compute-0
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec 09 16:03:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:03:55 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 09 16:03:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:03:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:03:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec 09 16:03:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 09 16:03:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 09 16:03:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec 09 16:03:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec 09 16:03:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev 6f54e72e-3af3-418a-a633-5c8ea9245333 (Updating mgr deployment (-1 -> 1))
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.sjfqtt from compute-0 -- ports [8765]
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.sjfqtt from compute-0 -- ports [8765]
Dec 09 16:03:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 nifty_tharp[81617]: Added host 'compute-0' with addr '192.168.122.100'
Dec 09 16:03:55 compute-0 nifty_tharp[81617]: Scheduled mon update...
Dec 09 16:03:55 compute-0 nifty_tharp[81617]: Scheduled mgr update...
Dec 09 16:03:55 compute-0 nifty_tharp[81617]: Scheduled osd.default_drive_group update...
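The four lines of nifty_tharp output summarize what `orch apply --in-file /home/ceph_spec.yaml` scheduled. The spec file itself never appears in the log; a hypothetical minimal ceph_spec.yaml consistent with the scheduled mon, mgr, and osd.default_drive_group services, following the documented service-spec layout:

```bash
# Hypothetical spec, written as a heredoc; the actual file was not logged.
cat > ceph_spec.yaml <<'EOF'
service_type: mon
placement:
  hosts:
    - compute-0
---
service_type: mgr
placement:
  hosts:
    - compute-0
---
service_type: osd
service_id: default_drive_group
placement:
  hosts:
    - compute-0
spec:
  data_devices:
    all: true
EOF
```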
Dec 09 16:03:55 compute-0 systemd[1]: libpod-035c31db7f62159a138b9947f61c3dc55c7647cab9aa545f5aba33915294bc3f.scope: Deactivated successfully.
Dec 09 16:03:55 compute-0 podman[81602]: 2025-12-09 16:03:55.72851153 +0000 UTC m=+1.123286394 container died 035c31db7f62159a138b9947f61c3dc55c7647cab9aa545f5aba33915294bc3f (image=quay.io/ceph/ceph:v20, name=nifty_tharp, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:03:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf9d7ea97be03d465309256c0feec95d54fc19f537715cf0b13be7c1ec840349-merged.mount: Deactivated successfully.
Dec 09 16:03:55 compute-0 podman[81602]: 2025-12-09 16:03:55.778581435 +0000 UTC m=+1.173356299 container remove 035c31db7f62159a138b9947f61c3dc55c7647cab9aa545f5aba33915294bc3f (image=quay.io/ceph/ceph:v20, name=nifty_tharp, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 09 16:03:55 compute-0 sudo[81711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:55 compute-0 systemd[1]: libpod-conmon-035c31db7f62159a138b9947f61c3dc55c7647cab9aa545f5aba33915294bc3f.scope: Deactivated successfully.
Dec 09 16:03:55 compute-0 sudo[81711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:55 compute-0 sudo[81711]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:55 compute-0 sudo[81599]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:55 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:55 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'snap_schedule'
Dec 09 16:03:55 compute-0 sudo[81749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --name mgr.compute-0.sjfqtt --force --tcp-ports 8765
Dec 09 16:03:55 compute-0 sudo[81749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
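Removal of the superseded mgr goes through the same cephadm binary, now pinned to the image digest rather than the v20 tag; the rm-daemon call from the sudo line above, reflowed:

```bash
sudo /bin/python3 \
  /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b \
  --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 \
  --timeout 895 \
  rm-daemon --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf \
  --name mgr.compute-0.sjfqtt --force --tcp-ports 8765
```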
Dec 09 16:03:55 compute-0 ceph-mon[75222]: from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:03:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:03:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:55 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'stats'
Dec 09 16:03:56 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'status'
Dec 09 16:03:56 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'telegraf'
Dec 09 16:03:56 compute-0 sudo[81804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eknwuljbpjxdjieslyejzspywgngqeic ; /usr/bin/python3'
Dec 09 16:03:56 compute-0 sudo[81804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:03:56 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'telemetry'
Dec 09 16:03:56 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.sjfqtt for 67f67f44-54fc-54ea-8df0-10931b6ecdaf...
Dec 09 16:03:56 compute-0 python3[81811]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
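This task polls the cluster through the same containerized client, piping `ceph status --format json` into jq; reflowed from the `_raw_params` above:

```bash
podman run --rm --net=host --ipc=host \
  --volume /etc/ceph:/etc/ceph:z \
  --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
  --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
  --entrypoint ceph quay.io/ceph/ceph:v20 \
  --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf \
  -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
  status --format json | jq .osdmap.num_up_osds
```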
Dec 09 16:03:56 compute-0 podman[81828]: 2025-12-09 16:03:56.342590453 +0000 UTC m=+0.051618076 container create 218c2a6add87b77fa02bc0fe80d538dc14c5c357b07e0e804e1fd643ec3cd142 (image=quay.io/ceph/ceph:v20, name=epic_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 09 16:03:56 compute-0 ceph-mgr[80673]: mgr[py] Loading python module 'test_orchestrator'
Dec 09 16:03:56 compute-0 systemd[1]: Started libpod-conmon-218c2a6add87b77fa02bc0fe80d538dc14c5c357b07e0e804e1fd643ec3cd142.scope.
Dec 09 16:03:56 compute-0 podman[81828]: 2025-12-09 16:03:56.315162538 +0000 UTC m=+0.024190171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:03:56 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d455c8c09c63f9d892287c942aa92749e98a00a41d34c09de1b65d198eaeb804/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d455c8c09c63f9d892287c942aa92749e98a00a41d34c09de1b65d198eaeb804/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d455c8c09c63f9d892287c942aa92749e98a00a41d34c09de1b65d198eaeb804/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:56 compute-0 podman[81828]: 2025-12-09 16:03:56.452284505 +0000 UTC m=+0.161312178 container init 218c2a6add87b77fa02bc0fe80d538dc14c5c357b07e0e804e1fd643ec3cd142 (image=quay.io/ceph/ceph:v20, name=epic_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:56 compute-0 podman[81828]: 2025-12-09 16:03:56.4597705 +0000 UTC m=+0.168798133 container start 218c2a6add87b77fa02bc0fe80d538dc14c5c357b07e0e804e1fd643ec3cd142 (image=quay.io/ceph/ceph:v20, name=epic_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:56 compute-0 podman[81828]: 2025-12-09 16:03:56.462681535 +0000 UTC m=+0.171709148 container attach 218c2a6add87b77fa02bc0fe80d538dc14c5c357b07e0e804e1fd643ec3cd142 (image=quay.io/ceph/ceph:v20, name=epic_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:03:56 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:03:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:03:56 compute-0 podman[81858]: 2025-12-09 16:03:56.496790739 +0000 UTC m=+0.103684517 container died b3b17105391d4a64fbd08f9c7bbd1c6ae25622203819d0918689b0090c91d47c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-sjfqtt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:03:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:03:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:03:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:03:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d49ab8966897121293d2e72f8b0befb75cb9d4f6511633b5c72c3caf7825cd7-merged.mount: Deactivated successfully.
Dec 09 16:03:56 compute-0 podman[81858]: 2025-12-09 16:03:56.553206551 +0000 UTC m=+0.160100329 container remove b3b17105391d4a64fbd08f9c7bbd1c6ae25622203819d0918689b0090c91d47c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-sjfqtt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:56 compute-0 bash[81858]: ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-sjfqtt
Dec 09 16:03:56 compute-0 systemd[1]: ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@mgr.compute-0.sjfqtt.service: Main process exited, code=exited, status=143/n/a
Dec 09 16:03:56 compute-0 systemd[1]: ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@mgr.compute-0.sjfqtt.service: Failed with result 'exit-code'.
Dec 09 16:03:56 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.sjfqtt for 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:03:56 compute-0 systemd[1]: ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@mgr.compute-0.sjfqtt.service: Consumed 7.232s CPU time, 418.6M memory peak, read 0B from disk, written 176.0K to disk.
Dec 09 16:03:56 compute-0 systemd[1]: Reloading.
Dec 09 16:03:56 compute-0 systemd-rc-local-generator[81966]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:03:56 compute-0 systemd-sysv-generator[81970]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
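The two generator notes during the daemon-reload are benign and unrelated to Ceph; the first names its own fix (the mode bit), and the SysV shim for `network` can be inspected directly:

```bash
# rc.local is skipped purely because of its mode bit:
chmod +x /etc/rc.d/rc.local
# show the compatibility unit systemd generated for the SysV 'network' script:
systemctl cat network.service
```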
Dec 09 16:03:56 compute-0 ceph-mon[75222]: Added host compute-0
Dec 09 16:03:56 compute-0 ceph-mon[75222]: Saving service mon spec with placement compute-0
Dec 09 16:03:56 compute-0 ceph-mon[75222]: Saving service mgr spec with placement compute-0
Dec 09 16:03:56 compute-0 ceph-mon[75222]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 09 16:03:56 compute-0 ceph-mon[75222]: Saving service osd.default_drive_group spec with placement compute-0
Dec 09 16:03:56 compute-0 ceph-mon[75222]: Removing daemon mgr.compute-0.sjfqtt from compute-0 -- ports [8765]
Dec 09 16:03:56 compute-0 ceph-mon[75222]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 09 16:03:57 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863399736' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 09 16:03:57 compute-0 epic_hellman[81864]: 
Dec 09 16:03:57 compute-0 epic_hellman[81864]: {"fsid":"67f67f44-54fc-54ea-8df0-10931b6ecdaf","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":52,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-09T16:03:01:781860+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-09T16:03:01.783688+0000","services":{}},"progress_events":{"6f54e72e-3af3-418a-a633-5c8ea9245333":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 09 16:03:57 compute-0 systemd[1]: libpod-218c2a6add87b77fa02bc0fe80d538dc14c5c357b07e0e804e1fd643ec3cd142.scope: Deactivated successfully.
Dec 09 16:03:57 compute-0 podman[81828]: 2025-12-09 16:03:57.082990972 +0000 UTC m=+0.792018595 container died 218c2a6add87b77fa02bc0fe80d538dc14c5c357b07e0e804e1fd643ec3cd142 (image=quay.io/ceph/ceph:v20, name=epic_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 09 16:03:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d455c8c09c63f9d892287c942aa92749e98a00a41d34c09de1b65d198eaeb804-merged.mount: Deactivated successfully.
Dec 09 16:03:57 compute-0 podman[81828]: 2025-12-09 16:03:57.130902227 +0000 UTC m=+0.839929840 container remove 218c2a6add87b77fa02bc0fe80d538dc14c5c357b07e0e804e1fd643ec3cd142 (image=quay.io/ceph/ceph:v20, name=epic_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 09 16:03:57 compute-0 sudo[81749]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:57 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.sjfqtt
Dec 09 16:03:57 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.sjfqtt
Dec 09 16:03:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.sjfqtt"} v 0)
Dec 09 16:03:57 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.sjfqtt"} : dispatch
Dec 09 16:03:57 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.sjfqtt"}]': finished
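With the daemon gone, its cephx key is retired as well; the CLI equivalent of the audited command:

```bash
ceph auth rm mgr.compute-0.sjfqtt
```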
Dec 09 16:03:57 compute-0 systemd[1]: libpod-conmon-218c2a6add87b77fa02bc0fe80d538dc14c5c357b07e0e804e1fd643ec3cd142.scope: Deactivated successfully.
Dec 09 16:03:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 09 16:03:57 compute-0 sudo[81804]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:57 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:57 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev 6f54e72e-3af3-418a-a633-5c8ea9245333 (Updating mgr deployment (-1 -> 1))
Dec 09 16:03:57 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event 6f54e72e-3af3-418a-a633-5c8ea9245333 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Dec 09 16:03:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 09 16:03:57 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:57 compute-0 sudo[81994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:03:57 compute-0 sudo[81994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:57 compute-0 sudo[81994]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:57 compute-0 sudo[82019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:57 compute-0 sudo[82019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:57 compute-0 sudo[82019]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:57 compute-0 sudo[82044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 09 16:03:57 compute-0 sudo[82044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:57 compute-0 podman[82111]: 2025-12-09 16:03:57.79920687 +0000 UTC m=+0.054659776 container exec 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 16:03:57 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:57 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2863399736' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 09 16:03:57 compute-0 ceph-mon[75222]: Removing key for mgr.compute-0.sjfqtt
Dec 09 16:03:57 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.sjfqtt"} : dispatch
Dec 09 16:03:57 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.sjfqtt"}]': finished
Dec 09 16:03:57 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:57 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:57 compute-0 podman[82111]: 2025-12-09 16:03:57.917270616 +0000 UTC m=+0.172723532 container exec_died 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:03:58 compute-0 sudo[82044]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:03:58 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:58 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:03:58 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:03:58 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:03:58 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:03:58 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:03:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:03:58 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:03:58 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:03:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:03:58 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:03:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:03:58 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:58 compute-0 sudo[82202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:03:58 compute-0 sudo[82202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:58 compute-0 sudo[82202]: pam_unix(sudo:session): session closed for user root
Dec 09 16:03:58 compute-0 sudo[82227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:03:58 compute-0 sudo[82227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:03:58 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:03:58 compute-0 podman[82264]: 2025-12-09 16:03:58.762882021 +0000 UTC m=+0.056083523 container create 994c9e1734d2e5c635bc74f19b9b50ae07428d93e83ed13dfe953cea9e5a6c45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 09 16:03:58 compute-0 systemd[1]: Started libpod-conmon-994c9e1734d2e5c635bc74f19b9b50ae07428d93e83ed13dfe953cea9e5a6c45.scope.
Dec 09 16:03:58 compute-0 podman[82264]: 2025-12-09 16:03:58.736633223 +0000 UTC m=+0.029834795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:03:58 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:58 compute-0 podman[82264]: 2025-12-09 16:03:58.861070827 +0000 UTC m=+0.154272319 container init 994c9e1734d2e5c635bc74f19b9b50ae07428d93e83ed13dfe953cea9e5a6c45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:58 compute-0 podman[82264]: 2025-12-09 16:03:58.871393074 +0000 UTC m=+0.164594566 container start 994c9e1734d2e5c635bc74f19b9b50ae07428d93e83ed13dfe953cea9e5a6c45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:03:58 compute-0 podman[82264]: 2025-12-09 16:03:58.876059047 +0000 UTC m=+0.169260539 container attach 994c9e1734d2e5c635bc74f19b9b50ae07428d93e83ed13dfe953cea9e5a6c45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_mayer, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 16:03:58 compute-0 elegant_mayer[82280]: 167 167
Dec 09 16:03:58 compute-0 systemd[1]: libpod-994c9e1734d2e5c635bc74f19b9b50ae07428d93e83ed13dfe953cea9e5a6c45.scope: Deactivated successfully.
Dec 09 16:03:58 compute-0 conmon[82280]: conmon 994c9e1734d2e5c635bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-994c9e1734d2e5c635bc74f19b9b50ae07428d93e83ed13dfe953cea9e5a6c45.scope/container/memory.events
Dec 09 16:03:58 compute-0 podman[82264]: 2025-12-09 16:03:58.880235473 +0000 UTC m=+0.173436965 container died 994c9e1734d2e5c635bc74f19b9b50ae07428d93e83ed13dfe953cea9e5a6c45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_mayer, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:03:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-63d46776b532eab6801acd40fa82a87fd7fb808c2f5f40822585f9794581d788-merged.mount: Deactivated successfully.
Dec 09 16:03:58 compute-0 podman[82264]: 2025-12-09 16:03:58.927437224 +0000 UTC m=+0.220638726 container remove 994c9e1734d2e5c635bc74f19b9b50ae07428d93e83ed13dfe953cea9e5a6c45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 09 16:03:58 compute-0 systemd[1]: libpod-conmon-994c9e1734d2e5c635bc74f19b9b50ae07428d93e83ed13dfe953cea9e5a6c45.scope: Deactivated successfully.
Dec 09 16:03:59 compute-0 podman[82304]: 2025-12-09 16:03:59.117744979 +0000 UTC m=+0.043458660 container create ef8e8307edee609fbef8c83268c9b91f546f4e36bd68dda16dac7f88f561c6a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:59 compute-0 systemd[1]: Started libpod-conmon-ef8e8307edee609fbef8c83268c9b91f546f4e36bd68dda16dac7f88f561c6a3.scope.
Dec 09 16:03:59 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:03:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff0569dac27b2ff55ceb6c2e3ffdceeae301992733437a9e7f0d8505f3c3fba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff0569dac27b2ff55ceb6c2e3ffdceeae301992733437a9e7f0d8505f3c3fba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff0569dac27b2ff55ceb6c2e3ffdceeae301992733437a9e7f0d8505f3c3fba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff0569dac27b2ff55ceb6c2e3ffdceeae301992733437a9e7f0d8505f3c3fba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff0569dac27b2ff55ceb6c2e3ffdceeae301992733437a9e7f0d8505f3c3fba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:03:59 compute-0 podman[82304]: 2025-12-09 16:03:59.100883129 +0000 UTC m=+0.026596830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:03:59 compute-0 podman[82304]: 2025-12-09 16:03:59.212924877 +0000 UTC m=+0.138638578 container init ef8e8307edee609fbef8c83268c9b91f546f4e36bd68dda16dac7f88f561c6a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ardinghelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:03:59 compute-0 podman[82304]: 2025-12-09 16:03:59.224002549 +0000 UTC m=+0.149716280 container start ef8e8307edee609fbef8c83268c9b91f546f4e36bd68dda16dac7f88f561c6a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ardinghelli, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:03:59 compute-0 podman[82304]: 2025-12-09 16:03:59.22829861 +0000 UTC m=+0.154012331 container attach ef8e8307edee609fbef8c83268c9b91f546f4e36bd68dda16dac7f88f561c6a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 16:03:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:03:59 compute-0 ceph-mon[75222]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:03:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:03:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:03:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:03:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:03:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:03:59 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:00 compute-0 angry_ardinghelli[82321]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:04:00 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:00 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:00 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 5f4f01e5-fa0f-4477-b4bb-353e06b17907
Dec 09 16:04:00 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:04:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907"} v 0)
Dec 09 16:04:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2918795119' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907"} : dispatch
Dec 09 16:04:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 09 16:04:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:04:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2918795119' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907"}]': finished
Dec 09 16:04:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 09 16:04:00 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 09 16:04:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 16:04:00 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:00 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 16:04:00 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 09 16:04:00 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 09 16:04:00 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 09 16:04:00 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:00 compute-0 lvm[82415]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:04:00 compute-0 lvm[82415]: VG ceph_vg0 finished
Dec 09 16:04:00 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 09 16:04:01 compute-0 ceph-mon[75222]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:01 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2918795119' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907"} : dispatch
Dec 09 16:04:01 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2918795119' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907"}]': finished
Dec 09 16:04:01 compute-0 ceph-mon[75222]: osdmap e4: 1 total, 0 up, 1 in
Dec 09 16:04:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 09 16:04:01 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1396797522' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 09 16:04:01 compute-0 angry_ardinghelli[82321]:  stderr: got monmap epoch 1
Dec 09 16:04:01 compute-0 angry_ardinghelli[82321]: --> Creating keyring file for osd.0
Dec 09 16:04:01 compute-0 ceph-mgr[75515]: [progress INFO root] Writing back 3 completed events
Dec 09 16:04:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 16:04:01 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:01 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 09 16:04:01 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 09 16:04:01 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 5f4f01e5-fa0f-4477-b4bb-353e06b17907 --setuser ceph --setgroup ceph
Dec 09 16:04:01 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:02 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1396797522' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 09 16:04:02 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]:  stderr: 2025-12-09T16:04:01.599+0000 7f74cb12c8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]:  stderr: 2025-12-09T16:04:01.624+0000 7f74cb12c8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 09 16:04:02 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:02 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 09 16:04:02 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:02 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 40156d55-4083-4945-ba83-3b1dee6eabbb
Dec 09 16:04:02 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "40156d55-4083-4945-ba83-3b1dee6eabbb"} v 0)
Dec 09 16:04:02 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3064669221' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "40156d55-4083-4945-ba83-3b1dee6eabbb"} : dispatch
Dec 09 16:04:02 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 09 16:04:02 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:04:03 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3064669221' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "40156d55-4083-4945-ba83-3b1dee6eabbb"}]': finished
Dec 09 16:04:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 09 16:04:03 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 09 16:04:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 16:04:03 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:03 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:03 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 16:04:03 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:03 compute-0 lvm[83368]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:04:03 compute-0 lvm[83368]: VG ceph_vg1 finished
Dec 09 16:04:03 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 09 16:04:03 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 09 16:04:03 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 09 16:04:03 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:03 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 09 16:04:03 compute-0 ceph-mon[75222]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:03 compute-0 ceph-mon[75222]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 09 16:04:03 compute-0 ceph-mon[75222]: Cluster is now healthy
Dec 09 16:04:03 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3064669221' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "40156d55-4083-4945-ba83-3b1dee6eabbb"} : dispatch
Dec 09 16:04:03 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3064669221' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "40156d55-4083-4945-ba83-3b1dee6eabbb"}]': finished
Dec 09 16:04:03 compute-0 ceph-mon[75222]: osdmap e5: 2 total, 0 up, 2 in
Dec 09 16:04:03 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:03 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 09 16:04:03 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3338300711' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 09 16:04:03 compute-0 angry_ardinghelli[82321]:  stderr: got monmap epoch 1
Dec 09 16:04:03 compute-0 angry_ardinghelli[82321]: --> Creating keyring file for osd.1
Dec 09 16:04:03 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 09 16:04:03 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 09 16:04:03 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 40156d55-4083-4945-ba83-3b1dee6eabbb --setuser ceph --setgroup ceph
Dec 09 16:04:03 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:04:04 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3338300711' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 09 16:04:04 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]:  stderr: 2025-12-09T16:04:03.867+0000 7fb0b4cd88c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]:  stderr: 2025-12-09T16:04:03.899+0000 7fb0b4cd88c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:04 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 243996ad-36e8-4855-a1e1-ac93cfca0f40
Dec 09 16:04:05 compute-0 ceph-mon[75222]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "243996ad-36e8-4855-a1e1-ac93cfca0f40"} v 0)
Dec 09 16:04:05 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3108996035' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "243996ad-36e8-4855-a1e1-ac93cfca0f40"} : dispatch
Dec 09 16:04:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 09 16:04:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:04:05 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3108996035' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "243996ad-36e8-4855-a1e1-ac93cfca0f40"}]': finished
Dec 09 16:04:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec 09 16:04:05 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec 09 16:04:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 16:04:05 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:05 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:05 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 16:04:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:05 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:05 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:05 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:05 compute-0 lvm[84331]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:04:05 compute-0 lvm[84331]: VG ceph_vg2 finished
Dec 09 16:04:05 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec 09 16:04:05 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec 09 16:04:05 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 09 16:04:05 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:05 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec 09 16:04:05 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 09 16:04:06 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1200399013' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 09 16:04:06 compute-0 angry_ardinghelli[82321]:  stderr: got monmap epoch 1
Dec 09 16:04:06 compute-0 angry_ardinghelli[82321]: --> Creating keyring file for osd.2
Dec 09 16:04:06 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec 09 16:04:06 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec 09 16:04:06 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 243996ad-36e8-4855-a1e1-ac93cfca0f40 --setuser ceph --setgroup ceph
Dec 09 16:04:06 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3108996035' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "243996ad-36e8-4855-a1e1-ac93cfca0f40"} : dispatch
Dec 09 16:04:06 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3108996035' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "243996ad-36e8-4855-a1e1-ac93cfca0f40"}]': finished
Dec 09 16:04:06 compute-0 ceph-mon[75222]: osdmap e6: 3 total, 0 up, 3 in
Dec 09 16:04:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:06 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1200399013' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 09 16:04:06 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:04:06 compute-0 angry_ardinghelli[82321]:  stderr: 2025-12-09T16:04:06.182+0000 7f76152688c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Dec 09 16:04:06 compute-0 angry_ardinghelli[82321]:  stderr: 2025-12-09T16:04:06.213+0000 7f76152688c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec 09 16:04:06 compute-0 angry_ardinghelli[82321]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec 09 16:04:07 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 09 16:04:07 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 09 16:04:07 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:07 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:07 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 09 16:04:07 compute-0 angry_ardinghelli[82321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 09 16:04:07 compute-0 angry_ardinghelli[82321]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 09 16:04:07 compute-0 angry_ardinghelli[82321]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec 09 16:04:07 compute-0 systemd[1]: libpod-ef8e8307edee609fbef8c83268c9b91f546f4e36bd68dda16dac7f88f561c6a3.scope: Deactivated successfully.
Dec 09 16:04:07 compute-0 systemd[1]: libpod-ef8e8307edee609fbef8c83268c9b91f546f4e36bd68dda16dac7f88f561c6a3.scope: Consumed 6.394s CPU time.
Dec 09 16:04:07 compute-0 conmon[82321]: conmon ef8e8307edee609fbef8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef8e8307edee609fbef8c83268c9b91f546f4e36bd68dda16dac7f88f561c6a3.scope/container/memory.events
Dec 09 16:04:07 compute-0 podman[82304]: 2025-12-09 16:04:07.186631929 +0000 UTC m=+8.112345610 container died ef8e8307edee609fbef8c83268c9b91f546f4e36bd68dda16dac7f88f561c6a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 16:04:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-fff0569dac27b2ff55ceb6c2e3ffdceeae301992733437a9e7f0d8505f3c3fba-merged.mount: Deactivated successfully.
Dec 09 16:04:07 compute-0 podman[82304]: 2025-12-09 16:04:07.229924583 +0000 UTC m=+8.155638264 container remove ef8e8307edee609fbef8c83268c9b91f546f4e36bd68dda16dac7f88f561c6a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ardinghelli, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 09 16:04:07 compute-0 systemd[1]: libpod-conmon-ef8e8307edee609fbef8c83268c9b91f546f4e36bd68dda16dac7f88f561c6a3.scope: Deactivated successfully.
Dec 09 16:04:07 compute-0 sudo[82227]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:07 compute-0 sudo[85266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:07 compute-0 sudo[85266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:07 compute-0 sudo[85266]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:07 compute-0 sudo[85291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:04:07 compute-0 sudo[85291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:07 compute-0 ceph-mon[75222]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:07 compute-0 podman[85328]: 2025-12-09 16:04:07.663862084 +0000 UTC m=+0.036273916 container create 2f54c38a5114c99d30bc2fc7a08508747c2cf3306d8c5199aabb283dca627bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 16:04:07 compute-0 systemd[1]: Started libpod-conmon-2f54c38a5114c99d30bc2fc7a08508747c2cf3306d8c5199aabb283dca627bc8.scope.
Dec 09 16:04:07 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:07 compute-0 podman[85328]: 2025-12-09 16:04:07.743574037 +0000 UTC m=+0.115985879 container init 2f54c38a5114c99d30bc2fc7a08508747c2cf3306d8c5199aabb283dca627bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_carson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 09 16:04:07 compute-0 podman[85328]: 2025-12-09 16:04:07.648330447 +0000 UTC m=+0.020742299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:07 compute-0 podman[85328]: 2025-12-09 16:04:07.752044024 +0000 UTC m=+0.124455856 container start 2f54c38a5114c99d30bc2fc7a08508747c2cf3306d8c5199aabb283dca627bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 09 16:04:07 compute-0 podman[85328]: 2025-12-09 16:04:07.755376823 +0000 UTC m=+0.127788685 container attach 2f54c38a5114c99d30bc2fc7a08508747c2cf3306d8c5199aabb283dca627bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_carson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:07 compute-0 vigorous_carson[85344]: 167 167
Dec 09 16:04:07 compute-0 systemd[1]: libpod-2f54c38a5114c99d30bc2fc7a08508747c2cf3306d8c5199aabb283dca627bc8.scope: Deactivated successfully.
Dec 09 16:04:07 compute-0 conmon[85344]: conmon 2f54c38a5114c99d30bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f54c38a5114c99d30bc2fc7a08508747c2cf3306d8c5199aabb283dca627bc8.scope/container/memory.events
Dec 09 16:04:07 compute-0 podman[85328]: 2025-12-09 16:04:07.760684106 +0000 UTC m=+0.133095938 container died 2f54c38a5114c99d30bc2fc7a08508747c2cf3306d8c5199aabb283dca627bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-064c6b513c575380ac3520812e6ef8abd99fa08978beaf9a6d920118275ebdc2-merged.mount: Deactivated successfully.
Dec 09 16:04:07 compute-0 podman[85328]: 2025-12-09 16:04:07.790986346 +0000 UTC m=+0.163398178 container remove 2f54c38a5114c99d30bc2fc7a08508747c2cf3306d8c5199aabb283dca627bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:07 compute-0 systemd[1]: libpod-conmon-2f54c38a5114c99d30bc2fc7a08508747c2cf3306d8c5199aabb283dca627bc8.scope: Deactivated successfully.
Dec 09 16:04:07 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:07 compute-0 podman[85369]: 2025-12-09 16:04:07.979368777 +0000 UTC m=+0.055695219 container create 50364d6cb86b57f4dd29e1962a3c4a8def13e9a0df6fcdf73260f75991743d17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 16:04:08 compute-0 systemd[1]: Started libpod-conmon-50364d6cb86b57f4dd29e1962a3c4a8def13e9a0df6fcdf73260f75991743d17.scope.
Dec 09 16:04:08 compute-0 podman[85369]: 2025-12-09 16:04:07.95159088 +0000 UTC m=+0.027917362 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:08 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11261a0490d95d470991ba705c4639bac1b4e5488bd3fab50e7afdd9552eb0ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11261a0490d95d470991ba705c4639bac1b4e5488bd3fab50e7afdd9552eb0ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11261a0490d95d470991ba705c4639bac1b4e5488bd3fab50e7afdd9552eb0ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11261a0490d95d470991ba705c4639bac1b4e5488bd3fab50e7afdd9552eb0ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
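The kernel's xfs remount notes above (repeated for each bind mount the container brings in) flag filesystems whose inode timestamps are 32-bit: 0x7fffffff seconds after the Unix epoch is the y2038 limit the message cites. A quick check of that arithmetic in Python:

    from datetime import datetime, timezone

    # 0x7fffffff == 2**31 - 1 seconds since the epoch, the limit the
    # kernel cites in the xfs remount messages above.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00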
Dec 09 16:04:08 compute-0 podman[85369]: 2025-12-09 16:04:08.077407599 +0000 UTC m=+0.153734041 container init 50364d6cb86b57f4dd29e1962a3c4a8def13e9a0df6fcdf73260f75991743d17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_beaver, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:08 compute-0 podman[85369]: 2025-12-09 16:04:08.094773896 +0000 UTC m=+0.171100298 container start 50364d6cb86b57f4dd29e1962a3c4a8def13e9a0df6fcdf73260f75991743d17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_beaver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:08 compute-0 podman[85369]: 2025-12-09 16:04:08.10069859 +0000 UTC m=+0.177025002 container attach 50364d6cb86b57f4dd29e1962a3c4a8def13e9a0df6fcdf73260f75991743d17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]: {
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:     "0": [
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:         {
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "devices": [
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "/dev/loop3"
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             ],
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_name": "ceph_lv0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_size": "21470642176",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "name": "ceph_lv0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "tags": {
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.cluster_name": "ceph",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.crush_device_class": "",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.encrypted": "0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.objectstore": "bluestore",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.osd_id": "0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.type": "block",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.vdo": "0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.with_tpm": "0"
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             },
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "type": "block",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "vg_name": "ceph_vg0"
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:         }
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:     ],
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:     "1": [
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:         {
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "devices": [
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "/dev/loop4"
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             ],
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_name": "ceph_lv1",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_size": "21470642176",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "name": "ceph_lv1",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "tags": {
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.cluster_name": "ceph",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.crush_device_class": "",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.encrypted": "0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.objectstore": "bluestore",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.osd_id": "1",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.type": "block",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.vdo": "0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.with_tpm": "0"
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             },
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "type": "block",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "vg_name": "ceph_vg1"
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:         }
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:     ],
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:     "2": [
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:         {
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "devices": [
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "/dev/loop5"
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             ],
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_name": "ceph_lv2",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_size": "21470642176",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "name": "ceph_lv2",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "tags": {
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.cluster_name": "ceph",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.crush_device_class": "",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.encrypted": "0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.objectstore": "bluestore",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.osd_id": "2",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.type": "block",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.vdo": "0",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:                 "ceph.with_tpm": "0"
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             },
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "type": "block",
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:             "vg_name": "ceph_vg2"
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:         }
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]:     ]
Dec 09 16:04:08 compute-0 compassionate_beaver[85386]: }
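The JSON emitted by the compassionate_beaver container above looks like the output of ceph-volume lvm list --format json: it maps OSD ids 0-2 to the bluestore logical volumes ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2, each backed by a single loop device (/dev/loop3-5) and tagged with the cluster fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf. A minimal Python sketch that extracts the OSD-to-device mapping, assuming the JSON has been captured to a file (the path lvm_list.json is hypothetical):

    import json

    # Parse the captured `ceph-volume lvm list --format json` payload.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    # Top-level keys are OSD ids ("0", "1", "2"); each maps to a list of LVs.
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")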
Dec 09 16:04:08 compute-0 systemd[1]: libpod-50364d6cb86b57f4dd29e1962a3c4a8def13e9a0df6fcdf73260f75991743d17.scope: Deactivated successfully.
Dec 09 16:04:08 compute-0 podman[85369]: 2025-12-09 16:04:08.450973057 +0000 UTC m=+0.527299469 container died 50364d6cb86b57f4dd29e1962a3c4a8def13e9a0df6fcdf73260f75991743d17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_beaver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-11261a0490d95d470991ba705c4639bac1b4e5488bd3fab50e7afdd9552eb0ff-merged.mount: Deactivated successfully.
Dec 09 16:04:08 compute-0 podman[85369]: 2025-12-09 16:04:08.494457777 +0000 UTC m=+0.570784179 container remove 50364d6cb86b57f4dd29e1962a3c4a8def13e9a0df6fcdf73260f75991743d17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:04:08 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:04:08 compute-0 systemd[1]: libpod-conmon-50364d6cb86b57f4dd29e1962a3c4a8def13e9a0df6fcdf73260f75991743d17.scope: Deactivated successfully.
Dec 09 16:04:08 compute-0 sudo[85291]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 09 16:04:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 09 16:04:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:04:08 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:08 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec 09 16:04:08 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec 09 16:04:08 compute-0 sudo[85408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:08 compute-0 sudo[85408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:08 compute-0 sudo[85408]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:08 compute-0 sudo[85433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:04:08 compute-0 sudo[85433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:09 compute-0 podman[85500]: 2025-12-09 16:04:09.159475845 +0000 UTC m=+0.059859816 container create 690704a777a27b2cc0c43cdc4406ec15bcfbe215cb1dadf2f88c6c5d7a644d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shirley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:09 compute-0 systemd[1]: Started libpod-conmon-690704a777a27b2cc0c43cdc4406ec15bcfbe215cb1dadf2f88c6c5d7a644d79.scope.
Dec 09 16:04:09 compute-0 podman[85500]: 2025-12-09 16:04:09.134955614 +0000 UTC m=+0.035339585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:09 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:09 compute-0 podman[85500]: 2025-12-09 16:04:09.251038555 +0000 UTC m=+0.151422576 container init 690704a777a27b2cc0c43cdc4406ec15bcfbe215cb1dadf2f88c6c5d7a644d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:04:09 compute-0 podman[85500]: 2025-12-09 16:04:09.262168968 +0000 UTC m=+0.162552909 container start 690704a777a27b2cc0c43cdc4406ec15bcfbe215cb1dadf2f88c6c5d7a644d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:04:09 compute-0 podman[85500]: 2025-12-09 16:04:09.26681746 +0000 UTC m=+0.167201521 container attach 690704a777a27b2cc0c43cdc4406ec15bcfbe215cb1dadf2f88c6c5d7a644d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:09 compute-0 gifted_shirley[85517]: 167 167
Dec 09 16:04:09 compute-0 systemd[1]: libpod-690704a777a27b2cc0c43cdc4406ec15bcfbe215cb1dadf2f88c6c5d7a644d79.scope: Deactivated successfully.
Dec 09 16:04:09 compute-0 podman[85500]: 2025-12-09 16:04:09.270779979 +0000 UTC m=+0.171163950 container died 690704a777a27b2cc0c43cdc4406ec15bcfbe215cb1dadf2f88c6c5d7a644d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 09 16:04:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb53f77ad127670cb61bd25ab431ab93e78e7950d301b54773e74a33a5f014f2-merged.mount: Deactivated successfully.
Dec 09 16:04:09 compute-0 podman[85500]: 2025-12-09 16:04:09.321608549 +0000 UTC m=+0.221992520 container remove 690704a777a27b2cc0c43cdc4406ec15bcfbe215cb1dadf2f88c6c5d7a644d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shirley, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:09 compute-0 systemd[1]: libpod-conmon-690704a777a27b2cc0c43cdc4406ec15bcfbe215cb1dadf2f88c6c5d7a644d79.scope: Deactivated successfully.
Dec 09 16:04:09 compute-0 ceph-mon[75222]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 09 16:04:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:09 compute-0 podman[85546]: 2025-12-09 16:04:09.672880881 +0000 UTC m=+0.042609963 container create 261c175a8fdd01eceee31247275f35aee5557944a0fb8d7810a9b769e4f6d0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate-test, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:04:09 compute-0 systemd[1]: Started libpod-conmon-261c175a8fdd01eceee31247275f35aee5557944a0fb8d7810a9b769e4f6d0f8.scope.
Dec 09 16:04:09 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:09 compute-0 podman[85546]: 2025-12-09 16:04:09.656158575 +0000 UTC m=+0.025887697 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e91178a005b0d1a19a407041b918cf521de28eae1ba81e6f96b0054d134094b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e91178a005b0d1a19a407041b918cf521de28eae1ba81e6f96b0054d134094b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e91178a005b0d1a19a407041b918cf521de28eae1ba81e6f96b0054d134094b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e91178a005b0d1a19a407041b918cf521de28eae1ba81e6f96b0054d134094b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e91178a005b0d1a19a407041b918cf521de28eae1ba81e6f96b0054d134094b/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:09 compute-0 podman[85546]: 2025-12-09 16:04:09.770747877 +0000 UTC m=+0.140476979 container init 261c175a8fdd01eceee31247275f35aee5557944a0fb8d7810a9b769e4f6d0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 09 16:04:09 compute-0 podman[85546]: 2025-12-09 16:04:09.781386864 +0000 UTC m=+0.151115956 container start 261c175a8fdd01eceee31247275f35aee5557944a0fb8d7810a9b769e4f6d0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:09 compute-0 podman[85546]: 2025-12-09 16:04:09.784873908 +0000 UTC m=+0.154603000 container attach 261c175a8fdd01eceee31247275f35aee5557944a0fb8d7810a9b769e4f6d0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate-test, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:09 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:09 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate-test[85563]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 09 16:04:09 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate-test[85563]:                             [--no-systemd] [--no-tmpfs]
Dec 09 16:04:09 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate-test[85563]: ceph-volume activate: error: unrecognized arguments: --bad-option
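The osd-0-activate-test container above exits on a usage error: ceph-volume activate is invoked with an unknown --bad-option flag, and the usage text plus "error: unrecognized arguments" is the classic argparse failure shape (exit status 2), which the container name suggests is a deliberate pre-flight probe. A small Python sketch that mimics the same CLI surface; this imitates the observed behavior and is not ceph-volume itself:

    import argparse

    # Mimic the options shown in the usage lines above; feeding an unknown
    # flag makes argparse print usage, the "unrecognized arguments" error,
    # and exit with status 2 -- matching the container's failure.
    parser = argparse.ArgumentParser(prog="ceph-volume activate")
    parser.add_argument("--osd-id", dest="osd_id")
    parser.add_argument("--osd-uuid", dest="osd_fsid")
    parser.add_argument("--no-systemd", action="store_true")
    parser.add_argument("--no-tmpfs", action="store_true")
    parser.parse_args(["--bad-option"])  # raises SystemExit(2)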
Dec 09 16:04:09 compute-0 systemd[1]: libpod-261c175a8fdd01eceee31247275f35aee5557944a0fb8d7810a9b769e4f6d0f8.scope: Deactivated successfully.
Dec 09 16:04:09 compute-0 podman[85546]: 2025-12-09 16:04:09.976875008 +0000 UTC m=+0.346604160 container died 261c175a8fdd01eceee31247275f35aee5557944a0fb8d7810a9b769e4f6d0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e91178a005b0d1a19a407041b918cf521de28eae1ba81e6f96b0054d134094b-merged.mount: Deactivated successfully.
Dec 09 16:04:10 compute-0 podman[85546]: 2025-12-09 16:04:10.039015407 +0000 UTC m=+0.408744529 container remove 261c175a8fdd01eceee31247275f35aee5557944a0fb8d7810a9b769e4f6d0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 09 16:04:10 compute-0 systemd[1]: libpod-conmon-261c175a8fdd01eceee31247275f35aee5557944a0fb8d7810a9b769e4f6d0f8.scope: Deactivated successfully.
Dec 09 16:04:10 compute-0 systemd[1]: Reloading.
Dec 09 16:04:10 compute-0 ceph-mon[75222]: Deploying daemon osd.0 on compute-0
Dec 09 16:04:10 compute-0 systemd-sysv-generator[85631]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:04:10 compute-0 systemd-rc-local-generator[85627]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:04:10 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:04:10 compute-0 systemd[1]: Reloading.
Dec 09 16:04:10 compute-0 systemd-rc-local-generator[85668]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:04:10 compute-0 systemd-sysv-generator[85672]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:04:10 compute-0 systemd[1]: Starting Ceph osd.0 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf...
Dec 09 16:04:11 compute-0 podman[85726]: 2025-12-09 16:04:11.165955999 +0000 UTC m=+0.035899973 container create 5ba5e22a8a26b476ae8a70d25d0e5831d3fa7c84f49f268ccf933c5fdf26caae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:11 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6814ec209bf99523be1cca8444c0da60504fc657faf8c7a0efe189b43a0a1c2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6814ec209bf99523be1cca8444c0da60504fc657faf8c7a0efe189b43a0a1c2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6814ec209bf99523be1cca8444c0da60504fc657faf8c7a0efe189b43a0a1c2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6814ec209bf99523be1cca8444c0da60504fc657faf8c7a0efe189b43a0a1c2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6814ec209bf99523be1cca8444c0da60504fc657faf8c7a0efe189b43a0a1c2e/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:11 compute-0 podman[85726]: 2025-12-09 16:04:11.237502816 +0000 UTC m=+0.107446810 container init 5ba5e22a8a26b476ae8a70d25d0e5831d3fa7c84f49f268ccf933c5fdf26caae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 09 16:04:11 compute-0 podman[85726]: 2025-12-09 16:04:11.150429062 +0000 UTC m=+0.020373056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:11 compute-0 podman[85726]: 2025-12-09 16:04:11.247043507 +0000 UTC m=+0.116987521 container start 5ba5e22a8a26b476ae8a70d25d0e5831d3fa7c84f49f268ccf933c5fdf26caae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 09 16:04:11 compute-0 podman[85726]: 2025-12-09 16:04:11.251125431 +0000 UTC m=+0.121069415 container attach 5ba5e22a8a26b476ae8a70d25d0e5831d3fa7c84f49f268ccf933c5fdf26caae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:04:11 compute-0 ceph-mon[75222]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:11 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate[85741]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:11 compute-0 bash[85726]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:11 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate[85741]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:11 compute-0 bash[85726]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:11 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:12 compute-0 lvm[85827]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:04:12 compute-0 lvm[85827]: VG ceph_vg0 finished
Dec 09 16:04:12 compute-0 lvm[85828]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:04:12 compute-0 lvm[85828]: VG ceph_vg1 finished
Dec 09 16:04:12 compute-0 lvm[85830]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:04:12 compute-0 lvm[85830]: VG ceph_vg2 finished
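The lvm[] lines above are LVM's event-driven autoactivation reporting in: as each loop-backed PV appears, its volume group is immediately "complete" and "finished", since each of these VGs has exactly one PV. A hedged Python sketch for pulling those events out of a saved journal capture (the path journal.txt is an assumption):

    import re

    # Match "PV <dev> online, VG <name> is complete" lines from a saved
    # journal capture; with one PV per VG, completion is immediate.
    pat = re.compile(r"lvm\[\d+\]: PV (\S+) online, VG (\S+) is complete")
    with open("journal.txt") as f:
        for line in f:
            m = pat.search(line)
            if m:
                print(f"{m.group(2)} complete on {m.group(1)}")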
Dec 09 16:04:12 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate[85741]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 09 16:04:12 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate[85741]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:12 compute-0 bash[85726]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 09 16:04:12 compute-0 bash[85726]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:12 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate[85741]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:12 compute-0 bash[85726]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:12 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate[85741]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 09 16:04:12 compute-0 bash[85726]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 09 16:04:12 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate[85741]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 09 16:04:12 compute-0 bash[85726]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 09 16:04:12 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate[85741]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:12 compute-0 bash[85726]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:12 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate[85741]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:12 compute-0 bash[85726]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:12 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate[85741]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 09 16:04:12 compute-0 bash[85726]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 09 16:04:12 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate[85741]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 09 16:04:12 compute-0 bash[85726]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 09 16:04:12 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate[85741]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 09 16:04:12 compute-0 bash[85726]: --> ceph-volume lvm activate successful for osd ID: 0
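The activate container first tries raw-mode activation ("Failed to activate via raw" above), then falls back to LVM activation, which succeeds: prime the OSD directory from the bluestore device, re-point the block symlink, and hand ownership to ceph:ceph. The step sequence, copied from the log into a dry-run Python list (printing only; actually executing these requires the container's mount namespace and root):

    # Commands as logged by the osd-0-activate container, in order.
    ACTIVATE_STEPS = [
        ["/usr/bin/ceph-bluestore-tool", "--cluster=ceph", "prime-osd-dir",
         "--dev", "/dev/ceph_vg0/ceph_lv0",
         "--path", "/var/lib/ceph/osd/ceph-0", "--no-mon-config"],
        ["/usr/bin/ln", "-snf", "/dev/ceph_vg0/ceph_lv0",
         "/var/lib/ceph/osd/ceph-0/block"],
        ["/usr/bin/chown", "-h", "ceph:ceph",
         "/var/lib/ceph/osd/ceph-0/block"],
        ["/usr/bin/chown", "-R", "ceph:ceph", "/dev/dm-0"],
        ["/usr/bin/chown", "-R", "ceph:ceph", "/var/lib/ceph/osd/ceph-0"],
    ]

    for cmd in ACTIVATE_STEPS:
        print(" ".join(cmd))  # dry run: list the steps, do not execute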
Dec 09 16:04:12 compute-0 systemd[1]: libpod-5ba5e22a8a26b476ae8a70d25d0e5831d3fa7c84f49f268ccf933c5fdf26caae.scope: Deactivated successfully.
Dec 09 16:04:12 compute-0 systemd[1]: libpod-5ba5e22a8a26b476ae8a70d25d0e5831d3fa7c84f49f268ccf933c5fdf26caae.scope: Consumed 1.536s CPU time.
Dec 09 16:04:12 compute-0 podman[85932]: 2025-12-09 16:04:12.418401499 +0000 UTC m=+0.024866483 container died 5ba5e22a8a26b476ae8a70d25d0e5831d3fa7c84f49f268ccf933c5fdf26caae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6814ec209bf99523be1cca8444c0da60504fc657faf8c7a0efe189b43a0a1c2e-merged.mount: Deactivated successfully.
Dec 09 16:04:12 compute-0 podman[85932]: 2025-12-09 16:04:12.465631601 +0000 UTC m=+0.072096565 container remove 5ba5e22a8a26b476ae8a70d25d0e5831d3fa7c84f49f268ccf933c5fdf26caae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0-activate, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:12 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:04:12 compute-0 podman[85994]: 2025-12-09 16:04:12.716817034 +0000 UTC m=+0.048510585 container create 012822ae8bedefb05d753efd429cb131456844e02bd9516e891592371e33cae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0cc9fed9fbae5f5db81501d3394a8a9c2fa7822b9143e14331fd1f86760c7be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0cc9fed9fbae5f5db81501d3394a8a9c2fa7822b9143e14331fd1f86760c7be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0cc9fed9fbae5f5db81501d3394a8a9c2fa7822b9143e14331fd1f86760c7be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0cc9fed9fbae5f5db81501d3394a8a9c2fa7822b9143e14331fd1f86760c7be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0cc9fed9fbae5f5db81501d3394a8a9c2fa7822b9143e14331fd1f86760c7be/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:12 compute-0 podman[85994]: 2025-12-09 16:04:12.69586252 +0000 UTC m=+0.027556121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:12 compute-0 podman[85994]: 2025-12-09 16:04:12.817466051 +0000 UTC m=+0.149159622 container init 012822ae8bedefb05d753efd429cb131456844e02bd9516e891592371e33cae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:12 compute-0 podman[85994]: 2025-12-09 16:04:12.824934775 +0000 UTC m=+0.156628326 container start 012822ae8bedefb05d753efd429cb131456844e02bd9516e891592371e33cae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:12 compute-0 bash[85994]: 012822ae8bedefb05d753efd429cb131456844e02bd9516e891592371e33cae1
Dec 09 16:04:12 compute-0 systemd[1]: Started Ceph osd.0 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:04:12 compute-0 ceph-osd[86013]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 16:04:12 compute-0 ceph-osd[86013]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 09 16:04:12 compute-0 ceph-osd[86013]: pidfile_write: ignore empty --pid-file
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) close
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) close
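Each bdev open/close pair above is bluestore probing its block device at startup: the F_SET_FILE_RW_HINT ioctl failing with EINVAL on this LV is harmless, the device's st_blksize of 512 is overridden to a 4 KiB bdev block size, and the reported capacity matches the lv_size from the earlier inventory. Verifying that size arithmetic in Python:

    # Size from the bdev open lines: 21470642176 bytes (0x4ffc00000).
    size = 21470642176
    assert size == 0x4FFC00000        # hex value printed in the log
    print(size / 2**30)               # ~19.996 GiB, logged as "20 GiB"
    print(size % 4096 == 0)           # True: a whole number of 4 KiB blocks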
Dec 09 16:04:12 compute-0 sudo[85433]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:04:12 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:04:12 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 09 16:04:12 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 09 16:04:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:04:12 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:12 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 09 16:04:12 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) close
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) close
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:12 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) close
Dec 09 16:04:12 compute-0 sudo[86028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:12 compute-0 sudo[86028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:12 compute-0 sudo[86028]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4986400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4986400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4986400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4986400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4986400 /var/lib/ceph/osd/ceph-0/block) close
Dec 09 16:04:13 compute-0 sudo[86059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
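The deploy command sudo logs here runs a copy of cephadm staged under the cluster's /var/lib/ceph/<fsid>/ directory with a 64-character hex suffix. A sketch of the suffix check (the suffix is copied from the log; reading it as a content-derived, sha256-length digest is an assumption, not confirmed from source):

    suffix = "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b"
    # 64 lowercase hex chars -- the length of a sha256 digest, suggesting a
    # content-addressed copy of the cephadm binary (assumption).
    assert len(suffix) == 64
    assert all(c in "0123456789abcdef" for c in suffix)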
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4986000 /var/lib/ceph/osd/ceph-0/block) close
Dec 09 16:04:13 compute-0 sudo[86059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:13 compute-0 ceph-osd[86013]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 09 16:04:13 compute-0 ceph-osd[86013]: load: jerasure load: lrc 
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 09 16:04:13 compute-0 ceph-osd[86013]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 09 16:04:13 compute-0 ceph-osd[86013]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
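The two mClock capacity figures imply the IOPS number they were derived from: dividing bandwidth per shard by cost per IO recovers roughly 315, and the bandwidth itself is exactly 150 MiB/s. A sketch (values copied from the log line; attributing 315 to the stock HDD IOPS default is an assumption):

    bw_per_shard = 157286400.0   # bytes/second, from the log line
    cost_per_io  = 499321.90     # bytes/io, from the log line
    print(bw_per_shard / 2**20)          # 150.0 -> exactly 150 MiB/s
    print(bw_per_shard / cost_per_io)    # ~315.0 implied IOPS
    # Assumption: 315 matches Ceph's default HDD IOPS capacity
    # (osd_mclock_max_capacity_iops_hdd), i.e. cost = bandwidth / IOPS.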
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e4987c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e561d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e561d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e561d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e561d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount shared_bdev_used = 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
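The db_paths sizes set by _prepare_db_environment work out to exactly 95% (rounded down) of the 21470642176-byte device that bluefs just mounted, and the bluefs allocation block size 0x10000 is 64 KiB. A sketch of the arithmetic (values copied from the log; the 95% reservation is inferred from the numbers, not stated by the log):

    dev   = 21470642176     # block device size from the bdev lines
    paths = 20397110067     # db / db.slow size from _prepare_db_environment
    assert dev * 95 // 100 == paths     # exactly 95%, rounded down
    assert 0x4ffc00000 == dev           # bluefs capacity: same device
    assert 0x10000 == 64 * 1024         # bluefs allocation unit: 64 KiB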
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: RocksDB version: 7.9.2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Git sha 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: DB SUMMARY
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: DB Session ID:  5NT2WUNMSE2YR167F0JB
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: CURRENT file:  CURRENT
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: IDENTITY file:  IDENTITY
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                         Options.error_if_exists: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.create_if_missing: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                         Options.paranoid_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                                     Options.env: 0x5567e4817ea0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                                Options.info_log: 0x5567e58688a0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_file_opening_threads: 16
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                              Options.statistics: (nil)
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.use_fsync: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.max_log_file_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                         Options.allow_fallocate: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.use_direct_reads: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.create_missing_column_families: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                              Options.db_log_dir: 
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                                 Options.wal_dir: db.wal
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.advise_random_on_open: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.write_buffer_manager: 0x5567e487cb40
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                            Options.rate_limiter: (nil)
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.unordered_write: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.row_cache: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                              Options.wal_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.allow_ingest_behind: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.two_write_queues: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.manual_wal_flush: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.wal_compression: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.atomic_flush: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.log_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.allow_data_in_errors: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.db_host_id: __hostname__
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.max_background_jobs: 4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.max_background_compactions: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.max_subcompactions: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.max_open_files: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.bytes_per_sync: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.max_background_flushes: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Compression algorithms supported:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kZSTD supported: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kXpressCompression supported: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kBZip2Compression supported: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kLZ4Compression supported: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kZlibCompression supported: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kLZ4HCCompression supported: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kSnappyCompression supported: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
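The RocksDB options dump repeats once per column family ([default] above, [m-0] and [m-1] below), so the near-identical blocks are expected output rather than duplication. One figure ties the dump back to BlueStore: the BinnedLRUCache capacity of 483183820 bytes is the kv share (0.45) of the 1 GiB cache_size from the _set_cache_sizes lines earlier. A sketch (values copied from the log; reading the block cache as the kv share is a numeric match, not confirmed from source):

    cache  = 1073741824      # bluestore cache_size (1 GiB)
    ratios = {"meta": 0.45, "kv": 0.45, "kv_onode": 0.04, "data": 0.06}
    assert abs(sum(ratios.values()) - 1.0) < 1e-9   # shares cover the cache
    print(int(cache * ratios["kv"]))  # 483183820 -> BinnedLRUCache capacity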
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
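
The table_properties_collectors entry a few lines up is worth decoding: BlueStore attaches RocksDB's stock CompactOnDeletionCollector to every SST file, so a file becomes a compaction candidate once any sliding window of 32768 entries contains 16384 tombstones (the deletion ratio of 0 disables the ratio-based test). A minimal sketch of the same wiring through the public C++ API, assuming headers from the RocksDB 7.9 line:

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    // Mark SSTs for compaction when tombstones cluster, mirroring the
    // "Sliding window size = 32768 Deletion trigger = 16384" line above.
    void AddDeletionTrigger(rocksdb::ColumnFamilyOptions& cf) {
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
    }
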
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
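
The table_factory continuation block maps one-to-one onto RocksDB's BlockBasedTableOptions; checksum 4 is kXXH3, and the 536870912-byte cache is Ceph's BinnedLRUCache sized at 512 MiB with 2^4 shards. A sketch restating the non-default values through the stock API (the bloom bits-per-key is an assumption, since the dump only names the policy, and a plain sharded LRUCache stands in for BinnedLRUCache):

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::Options MakeTableOptions() {
      rocksdb::BlockBasedTableOptions t;
      t.cache_index_and_filter_blocks = true;   // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;  // pin_top_level_index_and_filter: 1
      t.block_size = 4096;                      // block_size: 4096
      t.metadata_block_size = 4096;             // metadata_block_size: 4096
      t.block_restart_interval = 16;            // block_restart_interval: 16
      t.format_version = 5;                     // format_version: 5
      t.whole_key_filtering = true;             // whole_key_filtering: 1
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
      t.block_cache = rocksdb::NewLRUCache(512 << 20, /*num_shard_bits=*/4);
      rocksdb::Options opts;
      opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return opts;
    }
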
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
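
Taken together, the memtable and level settings in this dump describe the LSM shape: 16 MiB memtables (write_buffer_size) are merged six at a time (min_write_buffer_number_to_merge), so each flush writes an L0 file of roughly 96 MiB pre-compression, and eight such files (level0_file_num_compaction_trigger) accumulate before compaction into L1. Because level_compaction_dynamic_level_bytes is 0, level targets are static: L1 is capped at max_bytes_for_level_base = 1 GiB and each deeper level is 8x larger (max_bytes_for_level_multiplier), out to num_levels = 7. A back-of-the-envelope computation:

    #include <cstdint>
    #include <cstdio>

    // Static level targets implied by the dump: 1 GiB base, 8x multiplier.
    int main() {
      uint64_t target = 1ull << 30;              // max_bytes_for_level_base
      for (int level = 1; level < 7; ++level) {  // num_levels: 7
        std::printf("L%d target: %.0f GiB\n", level,
                    target / double(1ull << 30));
        target *= 8;                             // max_bytes_for_level_multiplier
      }
    }

which yields 1, 8, 64, 512, 4096 and 32768 GiB for L1 through L6, far beyond this 20 GiB OSD, so in practice data never reaches the deepest levels here.
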
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
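
The manifest recovery enumerates the twelve column families BlueStore shards its keyspace into; the m-*, p-* and O-* families plus L and P come from BlueStore's default column-family sharding of its RocksDB key prefixes (noted here as background rather than read from this log). If the DB were reachable as a plain directory, the same list could be pulled without opening it; a sketch with a hypothetical export path, since on this host the DB lives inside BlueFS and is only reachable via tools such as ceph-bluestore-tool:

    #include <rocksdb/db.h>
    #include <rocksdb/options.h>
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
      std::vector<std::string> cfs;
      rocksdb::Status s = rocksdb::DB::ListColumnFamilies(
          rocksdb::DBOptions(), "/tmp/exported-osd-db", &cfs);  // hypothetical path
      if (!s.ok()) return 1;
      for (const auto& cf : cfs) std::printf("%s\n", cf.c_str());
    }
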
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 18ed1e22-81d5-456f-a269-c6499fa10870
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296253224361, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296253226101, "job": 1, "event": "recovery_finished"}
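
The replay of WAL 000031.log ran in "mode 2", the numeric value of wal_recovery_mode, i.e. point-in-time recovery: the log is applied up to the first incomplete or corrupt record and anything after that is discarded. The bracketing EVENT_LOG timestamps (1765296253224361 to 1765296253226101 microseconds) put the whole recovery at about 1.7 ms, consistent with the 5097-byte WAL reported in the DB SUMMARY further down.
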
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
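
This line echoes the bluestore_rocksdb_options string Ceph handed to RocksDB, which explains the non-default values in the column-family dumps above (LZ4, 16 MiB write buffers, merge factor 6, 1 GiB level base, 8x multiplier). As an illustration only (BlueStore does its own option parsing, and the human-readable 2MB is expanded to bytes here on the assumption that the stock parser wants plain integers), RocksDB can ingest the same comma-separated form:

    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>
    #include <cassert>

    int main() {
      rocksdb::Options base, parsed;
      rocksdb::Status s = rocksdb::GetOptionsFromString(
          base,
          "compression=kLZ4Compression,max_write_buffer_number=64,"
          "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
          "write_buffer_size=16777216,max_background_jobs=4,"
          "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
          "max_bytes_for_level_multiplier=8,compaction_readahead_size=2097152,"
          "max_total_wal_size=1073741824,writable_file_max_buffer_size=0",
          &parsed);
      assert(s.ok());
      assert(parsed.max_write_buffer_number == 64);
      return 0;
    }
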
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: freelist init
Dec 09 16:04:13 compute-0 ceph-osd[86013]: freelist _read_cfg
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
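
The allocator numbers are easy to sanity-check: capacity 0x4ffc00000 is 21,470,642,176 bytes (the 20 GiB bdev size reported below), and free 0x4ffbfd000 leaves exactly 0x3000 = 12 KiB allocated, i.e. three 4 KiB blocks on an otherwise empty OSD, hence the near-zero fragmentation of 1.9e-07 across just 2 free extents.
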
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs umount
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e561d800 /var/lib/ceph/osd/ceph-0/block) close
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e561d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e561d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e561d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bdev(0x5567e561d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
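
Two lines in this bdev reopen look alarming but appear benign: the F_SET_FILE_RW_HINT ioctl fails with EINVAL because write-lifetime hints are not supported on this device path, and the OSD treats the hint as purely advisory; st_blksize 512 merely reflects the device's logical sector size, while BlueStore deliberately keeps its configured 4 KiB bdev_block_size, as the message itself says.
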
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluefs mount shared_bdev_used = 27262976
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
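
The shutdown at db_impl.cc:496 above, followed by this remount, looks like BlueStore's usual two-phase cold start: the first RocksDB open exists only to read the superblock metadata (nid_max, blobid_max, ondisk_format, min_alloc_size) and load the freelist; RocksDB is then closed, BlueFS is remounted (shared_bdev_used 27262976 bytes, i.e. 26 MiB of DB files so far), and _prepare_db_environment caps each of the db and db.slow paths at 20,397,110,067 bytes, about 95% of the 21,470,642,176-byte device, before the definitive open that produces the second DB SUMMARY below.
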
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: RocksDB version: 7.9.2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Git sha 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: DB SUMMARY
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: DB Session ID:  5NT2WUNMSE2YR167F0JA
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: CURRENT file:  CURRENT
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: IDENTITY file:  IDENTITY
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                         Options.error_if_exists: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.create_if_missing: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                         Options.paranoid_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                                     Options.env: 0x5567e4817ce0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                                Options.info_log: 0x5567e5868a40
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_file_opening_threads: 16
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                              Options.statistics: (nil)
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.use_fsync: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.max_log_file_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                         Options.allow_fallocate: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.use_direct_reads: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.create_missing_column_families: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                              Options.db_log_dir: 
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                                 Options.wal_dir: db.wal
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.advise_random_on_open: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.write_buffer_manager: 0x5567e487d900
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                            Options.rate_limiter: (nil)
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.unordered_write: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.row_cache: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                              Options.wal_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.allow_ingest_behind: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.two_write_queues: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.manual_wal_flush: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.wal_compression: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.atomic_flush: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.log_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.allow_data_in_errors: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.db_host_id: __hostname__
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.max_background_jobs: 4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.max_background_compactions: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.max_subcompactions: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.max_open_files: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.bytes_per_sync: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.max_background_flushes: -1
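
Most of this DB-wide dump is RocksDB 7.9.2 defaults; the operationally interesting overrides match the option string logged earlier. Restated as a sketch (field names and values copied from the dump; the comment on writable_file_max_buffer_size is an inference, since BlueFS does its own write buffering):

    #include <rocksdb/options.h>

    rocksdb::DBOptions MakeDbOptions() {
      rocksdb::DBOptions db;
      db.max_background_jobs = 4;            // shared flush + compaction budget
      db.max_total_wal_size = 1ull << 30;    // flush memtables once WALs hit 1 GiB
      db.delayed_write_rate = 16 << 20;      // 16 MiB/s throttle when L0 backs up
      db.max_open_files = -1;                // never evict table-file handles
      db.wal_dir = "db.wal";                 // WALs in their own BlueFS directory
      db.writable_file_max_buffer_size = 0;  // BlueFS buffers writes itself
      db.wal_recovery_mode =
          rocksdb::WALRecoveryMode::kPointInTimeRecovery;  // wal_recovery_mode: 2
      return db;
    }
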
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Compression algorithms supported:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kZSTD supported: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kXpressCompression supported: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kBZip2Compression supported: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kLZ4Compression supported: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kZlibCompression supported: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kLZ4HCCompression supported: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         kSnappyCompression supported: 1
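
This librocksdb build has no ZSTD, BZip2 or Xpress support compiled in, which makes BlueStore's kLZ4Compression choice the natural one among the four codecs actually available. A quick probe of the same capability set through the public API (GetSupportedCompressions returns exactly the codecs a given build can use):

    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>
    #include <cstdio>

    int main() {
      // Prints the numeric CompressionType of every codec this build supports,
      // mirroring the "supported: 1" lines in the log above.
      for (rocksdb::CompressionType t : rocksdb::GetSupportedCompressions())
        std::printf("supported codec: %d\n", static_cast<int>(t));
    }
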
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e5868bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e58690c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e58690c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5567e58690c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5567e481ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 18ed1e22-81d5-456f-a269-c6499fa10870
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296253281357, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296253285218, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296253, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "18ed1e22-81d5-456f-a269-c6499fa10870", "db_session_id": "5NT2WUNMSE2YR167F0JA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296253287877, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296253, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "18ed1e22-81d5-456f-a269-c6499fa10870", "db_session_id": "5NT2WUNMSE2YR167F0JA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296253290143, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296253, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "18ed1e22-81d5-456f-a269-c6499fa10870", "db_session_id": "5NT2WUNMSE2YR167F0JA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296253291480, "job": 1, "event": "recovery_finished"}
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5567e5a4c000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: DB pointer 0x5567e5a22000
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:04:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:04:13 compute-0 ceph-osd[86013]: bluestore.MempoolThread fragmentation_score=0.000124 took=0.000004s
Dec 09 16:04:13 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 09 16:04:13 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 09 16:04:13 compute-0 ceph-osd[86013]: _get_class not permitted to load lua
Dec 09 16:04:13 compute-0 ceph-osd[86013]: _get_class not permitted to load sdk
Dec 09 16:04:13 compute-0 ceph-osd[86013]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 09 16:04:13 compute-0 ceph-osd[86013]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 09 16:04:13 compute-0 ceph-osd[86013]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 09 16:04:13 compute-0 ceph-osd[86013]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 09 16:04:13 compute-0 ceph-osd[86013]: osd.0 0 load_pgs
Dec 09 16:04:13 compute-0 ceph-osd[86013]: osd.0 0 load_pgs opened 0 pgs
Dec 09 16:04:13 compute-0 ceph-osd[86013]: osd.0 0 log_to_monitors true
Dec 09 16:04:13 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0[86009]: 2025-12-09T16:04:13.315+0000 7fa722fdf8c0 -1 osd.0 0 log_to_monitors true
Dec 09 16:04:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec 09 16:04:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2573949666,v1:192.168.122.100:6803/2573949666]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 09 16:04:13 compute-0 ceph-mon[75222]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 09 16:04:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:13 compute-0 ceph-mon[75222]: from='osd.0 [v2:192.168.122.100:6802/2573949666,v1:192.168.122.100:6803/2573949666]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 09 16:04:13 compute-0 podman[86553]: 2025-12-09 16:04:13.432830437 +0000 UTC m=+0.038506049 container create 618ce403bd227ee1b2ee056cad1e6834591e1c3f0d1c5af417ab61e3fce08255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:13 compute-0 systemd[1]: Started libpod-conmon-618ce403bd227ee1b2ee056cad1e6834591e1c3f0d1c5af417ab61e3fce08255.scope.
Dec 09 16:04:13 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:13 compute-0 podman[86553]: 2025-12-09 16:04:13.500207967 +0000 UTC m=+0.105883599 container init 618ce403bd227ee1b2ee056cad1e6834591e1c3f0d1c5af417ab61e3fce08255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:04:13 compute-0 podman[86553]: 2025-12-09 16:04:13.506688109 +0000 UTC m=+0.112363721 container start 618ce403bd227ee1b2ee056cad1e6834591e1c3f0d1c5af417ab61e3fce08255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:13 compute-0 podman[86553]: 2025-12-09 16:04:13.509760489 +0000 UTC m=+0.115436101 container attach 618ce403bd227ee1b2ee056cad1e6834591e1c3f0d1c5af417ab61e3fce08255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:13 compute-0 podman[86553]: 2025-12-09 16:04:13.414365764 +0000 UTC m=+0.020041396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:13 compute-0 silly_shirley[86570]: 167 167
Dec 09 16:04:13 compute-0 systemd[1]: libpod-618ce403bd227ee1b2ee056cad1e6834591e1c3f0d1c5af417ab61e3fce08255.scope: Deactivated successfully.
Dec 09 16:04:13 compute-0 podman[86553]: 2025-12-09 16:04:13.512121916 +0000 UTC m=+0.117797538 container died 618ce403bd227ee1b2ee056cad1e6834591e1c3f0d1c5af417ab61e3fce08255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:04:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d44ab6171316c65512367c0ce1c4df4bc75c227b528dd88313761a7d22e2bff1-merged.mount: Deactivated successfully.
Dec 09 16:04:13 compute-0 podman[86553]: 2025-12-09 16:04:13.543905404 +0000 UTC m=+0.149581026 container remove 618ce403bd227ee1b2ee056cad1e6834591e1c3f0d1c5af417ab61e3fce08255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 09 16:04:13 compute-0 systemd[1]: libpod-conmon-618ce403bd227ee1b2ee056cad1e6834591e1c3f0d1c5af417ab61e3fce08255.scope: Deactivated successfully.
Dec 09 16:04:13 compute-0 podman[86598]: 2025-12-09 16:04:13.809360213 +0000 UTC m=+0.037020070 container create 4bd523b17b309fa430be5c8d25c6af22005d709249033c40f36fbef5d695fdbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate-test, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:13 compute-0 systemd[1]: Started libpod-conmon-4bd523b17b309fa430be5c8d25c6af22005d709249033c40f36fbef5d695fdbc.scope.
Dec 09 16:04:13 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:13 compute-0 podman[86598]: 2025-12-09 16:04:13.794461216 +0000 UTC m=+0.022121093 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:13 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30174596e2e4404f788a7cfd8ca04a9811984a09eececcde37d93833672b8127/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30174596e2e4404f788a7cfd8ca04a9811984a09eececcde37d93833672b8127/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30174596e2e4404f788a7cfd8ca04a9811984a09eececcde37d93833672b8127/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30174596e2e4404f788a7cfd8ca04a9811984a09eececcde37d93833672b8127/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30174596e2e4404f788a7cfd8ca04a9811984a09eececcde37d93833672b8127/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:13 compute-0 podman[86598]: 2025-12-09 16:04:13.908441158 +0000 UTC m=+0.136101045 container init 4bd523b17b309fa430be5c8d25c6af22005d709249033c40f36fbef5d695fdbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate-test, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 09 16:04:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:04:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2573949666,v1:192.168.122.100:6803/2573949666]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 09 16:04:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec 09 16:04:13 compute-0 podman[86598]: 2025-12-09 16:04:13.920654657 +0000 UTC m=+0.148314514 container start 4bd523b17b309fa430be5c8d25c6af22005d709249033c40f36fbef5d695fdbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate-test, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:13 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec 09 16:04:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 09 16:04:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2573949666,v1:192.168.122.100:6803/2573949666]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 09 16:04:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 09 16:04:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 16:04:13 compute-0 podman[86598]: 2025-12-09 16:04:13.924784462 +0000 UTC m=+0.152444329 container attach 4bd523b17b309fa430be5c8d25c6af22005d709249033c40f36fbef5d695fdbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 09 16:04:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:13 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 16:04:13 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:13 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:14 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate-test[86614]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 09 16:04:14 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate-test[86614]:                             [--no-systemd] [--no-tmpfs]
Dec 09 16:04:14 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate-test[86614]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 09 16:04:14 compute-0 systemd[1]: libpod-4bd523b17b309fa430be5c8d25c6af22005d709249033c40f36fbef5d695fdbc.scope: Deactivated successfully.
Dec 09 16:04:14 compute-0 podman[86598]: 2025-12-09 16:04:14.109937769 +0000 UTC m=+0.337597626 container died 4bd523b17b309fa430be5c8d25c6af22005d709249033c40f36fbef5d695fdbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-30174596e2e4404f788a7cfd8ca04a9811984a09eececcde37d93833672b8127-merged.mount: Deactivated successfully.
Dec 09 16:04:14 compute-0 podman[86598]: 2025-12-09 16:04:14.161213543 +0000 UTC m=+0.388873410 container remove 4bd523b17b309fa430be5c8d25c6af22005d709249033c40f36fbef5d695fdbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate-test, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:14 compute-0 systemd[1]: libpod-conmon-4bd523b17b309fa430be5c8d25c6af22005d709249033c40f36fbef5d695fdbc.scope: Deactivated successfully.
Dec 09 16:04:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:04:14 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 09 16:04:14 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 09 16:04:14 compute-0 systemd[1]: Reloading.
Dec 09 16:04:14 compute-0 ceph-mon[75222]: Deploying daemon osd.1 on compute-0
Dec 09 16:04:14 compute-0 ceph-mon[75222]: from='osd.0 [v2:192.168.122.100:6802/2573949666,v1:192.168.122.100:6803/2573949666]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 09 16:04:14 compute-0 ceph-mon[75222]: osdmap e7: 3 total, 0 up, 3 in
Dec 09 16:04:14 compute-0 ceph-mon[75222]: from='osd.0 [v2:192.168.122.100:6802/2573949666,v1:192.168.122.100:6803/2573949666]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 09 16:04:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:14 compute-0 systemd-rc-local-generator[86680]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:04:14 compute-0 systemd-sysv-generator[86684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:04:14 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:04:14 compute-0 systemd[1]: Reloading.
Dec 09 16:04:14 compute-0 systemd-rc-local-generator[86720]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:04:14 compute-0 systemd-sysv-generator[86723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:04:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 09 16:04:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:04:14 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2573949666,v1:192.168.122.100:6803/2573949666]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 09 16:04:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec 09 16:04:14 compute-0 ceph-osd[86013]: osd.0 0 done with init, starting boot process
Dec 09 16:04:14 compute-0 ceph-osd[86013]: osd.0 0 start_boot
Dec 09 16:04:14 compute-0 ceph-osd[86013]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 09 16:04:14 compute-0 ceph-osd[86013]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 09 16:04:14 compute-0 ceph-osd[86013]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 09 16:04:14 compute-0 ceph-osd[86013]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 09 16:04:14 compute-0 ceph-osd[86013]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 09 16:04:14 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec 09 16:04:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 16:04:14 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:14 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:14 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:14 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 16:04:14 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:14 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:14 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2573949666; not ready for session (expect reconnect)
Dec 09 16:04:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 16:04:14 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:14 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 16:04:14 compute-0 systemd[1]: Starting Ceph osd.1 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf...
Dec 09 16:04:15 compute-0 podman[86776]: 2025-12-09 16:04:15.172484848 +0000 UTC m=+0.042618203 container create fec75b6ae786331e0bdcbbf3086851ca653adf2fa1e2b13602ec0ed8426f4087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 09 16:04:15 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131cb34372f5627338d9fbe969a6e266f8953f2c1d97668cc5da085538849593/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131cb34372f5627338d9fbe969a6e266f8953f2c1d97668cc5da085538849593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131cb34372f5627338d9fbe969a6e266f8953f2c1d97668cc5da085538849593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:15 compute-0 podman[86776]: 2025-12-09 16:04:15.152218576 +0000 UTC m=+0.022351941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131cb34372f5627338d9fbe969a6e266f8953f2c1d97668cc5da085538849593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131cb34372f5627338d9fbe969a6e266f8953f2c1d97668cc5da085538849593/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:15 compute-0 podman[86776]: 2025-12-09 16:04:15.26440378 +0000 UTC m=+0.134537125 container init fec75b6ae786331e0bdcbbf3086851ca653adf2fa1e2b13602ec0ed8426f4087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 09 16:04:15 compute-0 podman[86776]: 2025-12-09 16:04:15.272470153 +0000 UTC m=+0.142603498 container start fec75b6ae786331e0bdcbbf3086851ca653adf2fa1e2b13602ec0ed8426f4087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 16:04:15 compute-0 podman[86776]: 2025-12-09 16:04:15.289800569 +0000 UTC m=+0.159933964 container attach fec75b6ae786331e0bdcbbf3086851ca653adf2fa1e2b13602ec0ed8426f4087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:04:15 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate[86790]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:15 compute-0 ceph-mon[75222]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:15 compute-0 ceph-mon[75222]: from='osd.0 [v2:192.168.122.100:6802/2573949666,v1:192.168.122.100:6803/2573949666]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 09 16:04:15 compute-0 ceph-mon[75222]: osdmap e8: 3 total, 0 up, 3 in
Dec 09 16:04:15 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:15 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:15 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:15 compute-0 bash[86776]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:15 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:15 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate[86790]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:15 compute-0 bash[86776]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:15 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:15 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2573949666; not ready for session (expect reconnect)
Dec 09 16:04:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 16:04:15 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:15 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 16:04:16 compute-0 lvm[86877]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:04:16 compute-0 lvm[86877]: VG ceph_vg1 finished
Dec 09 16:04:16 compute-0 lvm[86880]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:04:16 compute-0 lvm[86880]: VG ceph_vg2 finished
Dec 09 16:04:16 compute-0 lvm[86876]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:04:16 compute-0 lvm[86876]: VG ceph_vg0 finished
Dec 09 16:04:16 compute-0 lvm[86882]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:04:16 compute-0 lvm[86882]: VG ceph_vg2 finished
Dec 09 16:04:16 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate[86790]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 09 16:04:16 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate[86790]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:16 compute-0 bash[86776]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 09 16:04:16 compute-0 bash[86776]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:16 compute-0 lvm[86884]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:04:16 compute-0 lvm[86884]: VG ceph_vg2 finished
Dec 09 16:04:16 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate[86790]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:16 compute-0 bash[86776]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:16 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate[86790]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 09 16:04:16 compute-0 bash[86776]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 09 16:04:16 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate[86790]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 09 16:04:16 compute-0 bash[86776]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 09 16:04:16 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate[86790]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:16 compute-0 bash[86776]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:16 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate[86790]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:16 compute-0 bash[86776]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:16 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate[86790]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 09 16:04:16 compute-0 bash[86776]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 09 16:04:16 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate[86790]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 09 16:04:16 compute-0 bash[86776]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 09 16:04:16 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate[86790]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 09 16:04:16 compute-0 bash[86776]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 09 16:04:16 compute-0 systemd[1]: libpod-fec75b6ae786331e0bdcbbf3086851ca653adf2fa1e2b13602ec0ed8426f4087.scope: Deactivated successfully.
Dec 09 16:04:16 compute-0 podman[86776]: 2025-12-09 16:04:16.431709949 +0000 UTC m=+1.301843294 container died fec75b6ae786331e0bdcbbf3086851ca653adf2fa1e2b13602ec0ed8426f4087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:16 compute-0 systemd[1]: libpod-fec75b6ae786331e0bdcbbf3086851ca653adf2fa1e2b13602ec0ed8426f4087.scope: Consumed 1.643s CPU time.
Dec 09 16:04:16 compute-0 ceph-mon[75222]: purged_snaps scrub starts
Dec 09 16:04:16 compute-0 ceph-mon[75222]: purged_snaps scrub ok
Dec 09 16:04:16 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-131cb34372f5627338d9fbe969a6e266f8953f2c1d97668cc5da085538849593-merged.mount: Deactivated successfully.
Dec 09 16:04:16 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:04:16 compute-0 podman[86776]: 2025-12-09 16:04:16.535754147 +0000 UTC m=+1.405887492 container remove fec75b6ae786331e0bdcbbf3086851ca653adf2fa1e2b13602ec0ed8426f4087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:16 compute-0 podman[87036]: 2025-12-09 16:04:16.752211865 +0000 UTC m=+0.060875219 container create 187eb50611d2ddb1b41d9735d4d6534119cb949c4865c9c5b94e777c53253f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:16 compute-0 podman[87036]: 2025-12-09 16:04:16.714256576 +0000 UTC m=+0.022919980 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:16 compute-0 sshd-session[86986]: Invalid user admin from 146.190.31.45 port 54540
Dec 09 16:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8596c538902bc4535dff82b7a8a22981e72d70123b2e43b209e873dedc5199d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8596c538902bc4535dff82b7a8a22981e72d70123b2e43b209e873dedc5199d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8596c538902bc4535dff82b7a8a22981e72d70123b2e43b209e873dedc5199d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8596c538902bc4535dff82b7a8a22981e72d70123b2e43b209e873dedc5199d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8596c538902bc4535dff82b7a8a22981e72d70123b2e43b209e873dedc5199d1/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:16 compute-0 podman[87036]: 2025-12-09 16:04:16.841068897 +0000 UTC m=+0.149732281 container init 187eb50611d2ddb1b41d9735d4d6534119cb949c4865c9c5b94e777c53253f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 09 16:04:16 compute-0 podman[87036]: 2025-12-09 16:04:16.850548977 +0000 UTC m=+0.159212341 container start 187eb50611d2ddb1b41d9735d4d6534119cb949c4865c9c5b94e777c53253f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 16:04:16 compute-0 bash[87036]: 187eb50611d2ddb1b41d9735d4d6534119cb949c4865c9c5b94e777c53253f5a
Dec 09 16:04:16 compute-0 systemd[1]: Started Ceph osd.1 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:04:16 compute-0 sshd-session[86986]: Connection closed by invalid user admin 146.190.31.45 port 54540 [preauth]
Dec 09 16:04:16 compute-0 ceph-osd[87055]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 16:04:16 compute-0 ceph-osd[87055]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 09 16:04:16 compute-0 ceph-osd[87055]: pidfile_write: ignore empty --pid-file
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:16 compute-0 sudo[86059]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:16 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2573949666; not ready for session (expect reconnect)
Dec 09 16:04:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 16:04:16 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:16 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 16:04:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:04:16 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:16 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 09 16:04:16 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 09 16:04:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:04:16 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:16 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec 09 16:04:16 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:16 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8400 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:17 compute-0 sudo[87071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:17 compute-0 sudo[87071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:17 compute-0 sudo[87071]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff8000 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:17 compute-0 ceph-osd[87055]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 09 16:04:17 compute-0 ceph-osd[87055]: load: jerasure load: lrc 
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:17 compute-0 sudo[87103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:04:17 compute-0 sudo[87103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:17 compute-0 ceph-osd[87055]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 09 16:04:17 compute-0 ceph-osd[87055]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564027ff9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564028c99800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564028c99800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564028c99800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564028c99800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount shared_bdev_used = 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: RocksDB version: 7.9.2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Git sha 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: DB SUMMARY
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: DB Session ID:  23SW4XWEFX5C5OTD4JEJ
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: CURRENT file:  CURRENT
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: IDENTITY file:  IDENTITY
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                         Options.error_if_exists: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.create_if_missing: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                         Options.paranoid_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                                     Options.env: 0x564027e89ea0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                                Options.info_log: 0x564028f1a8a0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_file_opening_threads: 16
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                              Options.statistics: (nil)
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.use_fsync: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.max_log_file_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                         Options.allow_fallocate: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.use_direct_reads: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.create_missing_column_families: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                              Options.db_log_dir: 
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                                 Options.wal_dir: db.wal
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.advise_random_on_open: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.write_buffer_manager: 0x564027eeeb40
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                            Options.rate_limiter: (nil)
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.unordered_write: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.row_cache: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                              Options.wal_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.allow_ingest_behind: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.two_write_queues: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.manual_wal_flush: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.wal_compression: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.atomic_flush: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.log_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.allow_data_in_errors: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.db_host_id: __hostname__
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.max_background_jobs: 4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.max_background_compactions: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.max_subcompactions: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.max_open_files: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.bytes_per_sync: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.max_background_flushes: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Compression algorithms supported:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kZSTD supported: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kXpressCompression supported: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kBZip2Compression supported: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kLZ4Compression supported: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kZlibCompression supported: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kLZ4HCCompression supported: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kSnappyCompression supported: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7b075635-79fd-4446-94ae-4cb35991cd24
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296257273770, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296257275880, "job": 1, "event": "recovery_finished"}
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: freelist init
Dec 09 16:04:17 compute-0 ceph-osd[87055]: freelist _read_cfg
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs umount
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564028c99800 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564028c99800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564028c99800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564028c99800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bdev(0x564028c99800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluefs mount shared_bdev_used = 27262976
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: RocksDB version: 7.9.2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Git sha 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: DB SUMMARY
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: DB Session ID:  23SW4XWEFX5C5OTD4JEI
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: CURRENT file:  CURRENT
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: IDENTITY file:  IDENTITY
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                         Options.error_if_exists: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.create_if_missing: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                         Options.paranoid_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                                     Options.env: 0x564027e89ce0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                                Options.info_log: 0x564028f1a960
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_file_opening_threads: 16
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                              Options.statistics: (nil)
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.use_fsync: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.max_log_file_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                         Options.allow_fallocate: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.use_direct_reads: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.create_missing_column_families: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                              Options.db_log_dir: 
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                                 Options.wal_dir: db.wal
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.advise_random_on_open: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.write_buffer_manager: 0x564027eeeb40
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                            Options.rate_limiter: (nil)
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.unordered_write: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.row_cache: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                              Options.wal_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.allow_ingest_behind: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.two_write_queues: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.manual_wal_flush: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.wal_compression: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.atomic_flush: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.log_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.allow_data_in_errors: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.db_host_id: __hostname__
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.max_background_jobs: 4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.max_background_compactions: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.max_subcompactions: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.max_open_files: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.bytes_per_sync: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.max_background_flushes: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Compression algorithms supported:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kZSTD supported: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kXpressCompression supported: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kBZip2Compression supported: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kLZ4Compression supported: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kZlibCompression supported: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kLZ4HCCompression supported: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         kSnappyCompression supported: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1abc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1abc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1abc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1abc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1abc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
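The p-0 dump above uses a BlockBasedTable with 4 KiB blocks, format_version 5, checksum type 4 (XXH3), index and filter blocks kept in the block cache, and a shared ~460 MiB cache split across 16 shards (num_shard_bits: 4). Ceph supplies its own BinnedLRUCache implementation, which is not part of stock RocksDB; as a stand-in, a sketch of the same table options with the stock API might look like the following. The bloom bits-per-key value is not shown in the dump (only "bloomfilter"), so 10 below is an assumption:

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Rebuild the table_factory options block with stock RocksDB types.
    void set_table_factory(rocksdb::ColumnFamilyOptions& cf_opts) {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;                      // block_size: 4096
      t.cache_index_and_filter_blocks = true;   // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;  // pin_top_level_index_and_filter: 1
      t.format_version = 5;                     // format_version: 5
      t.checksum = rocksdb::kXXH3;              // checksum: 4 (XXH3)
      t.whole_key_filtering = true;             // whole_key_filtering: 1
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
      // Stand-in for Ceph's BinnedLRUCache: same capacity, 16 shards,
      // no strict limit, no high-priority pool (high_pri_pool_ratio: 0.000).
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4,
                                           /*strict_capacity_limit=*/false,
                                           /*high_pri_pool_ratio=*/0.0);
      cf_opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
    }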
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1abc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
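With level_compaction_dynamic_level_bytes disabled, the level sizing in these dumps is geometric: max_bytes_for_level_base = 1 GiB, max_bytes_for_level_multiplier = 8, and all addtl per-level multipliers are 1, so L1 = 1 GiB, L2 = 8 GiB, L3 = 64 GiB, up through num_levels: 7. A small self-contained sketch of that arithmetic:

    #include <cinttypes>
    #include <cstdint>
    #include <cstdio>

    int main() {
      // Target size of level n (n >= 1) is base * multiplier^(n-1) when
      // level_compaction_dynamic_level_bytes is off and addtl factors are 1.
      const uint64_t base = 1073741824ULL;  // max_bytes_for_level_base (1 GiB)
      const double mult = 8.0;              // max_bytes_for_level_multiplier
      uint64_t target = base;
      for (int level = 1; level <= 6; ++level) {  // num_levels: 7 => L1..L6
        std::printf("L%d target: %" PRIu64 " bytes\n", level, target);
        target = static_cast<uint64_t>(target * mult);
      }
      return 0;
    }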
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1abc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
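The p-0, p-1, and p-2 dumps above are identical down to the block_cache pointer (0x564027e8d8d0): BlueStore shards parts of its RocksDB keyspace across column families that reuse one options object and one cache, while the O-* families that follow get a separate 512 MiB cache (0x564027e8da30). A minimal sketch of opening a DB with such shared-options column families via the stock RocksDB API is below; the path, helper name, and the choice of column families are illustrative, not Ceph's actual wiring:

    #include <rocksdb/db.h>
    #include <string>
    #include <vector>

    // Open one DB whose shard column families all reuse a single options
    // object (and therefore one block cache), as the identical dumps show.
    rocksdb::Status open_sharded(
        const rocksdb::DBOptions& db_opts,
        const rocksdb::ColumnFamilyOptions& cf_opts,
        rocksdb::DB** db,
        std::vector<rocksdb::ColumnFamilyHandle*>* handles) {
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
          {rocksdb::kDefaultColumnFamilyName, cf_opts},
          {"p-0", cf_opts},
          {"p-1", cf_opts},
          {"p-2", cf_opts},
      };
      return rocksdb::DB::Open(db_opts, "/path/to/osd/db", cfs, handles, db);
    }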
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1b0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
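The write-path limits repeated in every dump bound memory per column family: write_buffer_size 16 MiB times max_write_buffer_number 64 allows up to 1 GiB of memtables, min_write_buffer_number_to_merge 6 means a flush combines six memtables (~96 MiB) into one L0 file, and the L0 triggers then compact at 8 files, throttle writes at 20, and stall at 36. The same values as a stock-RocksDB sketch (the field names are real ColumnFamilyOptions members; the function is illustrative):

    #include <rocksdb/options.h>

    // The per-column-family write path limits repeated in every dump above.
    rocksdb::ColumnFamilyOptions write_path_options() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;            // 16 MiB per memtable
      cf.max_write_buffer_number = 64;            // up to 64 memtables (1 GiB)
      cf.min_write_buffer_number_to_merge = 6;    // flush merges 6 (~96 MiB)
      cf.level0_file_num_compaction_trigger = 8;  // compact at 8 L0 files
      cf.level0_slowdown_writes_trigger = 20;     // throttle at 20 L0 files
      cf.level0_stop_writes_trigger = 36;         // stall at 36 L0 files
      return cf;
    }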
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1b0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564028f1b0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564027e8da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7b075635-79fd-4446-94ae-4cb35991cd24
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296257329949, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296257348762, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296257, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b075635-79fd-4446-94ae-4cb35991cd24", "db_session_id": "23SW4XWEFX5C5OTD4JEI", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296257367624, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296257, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b075635-79fd-4446-94ae-4cb35991cd24", "db_session_id": "23SW4XWEFX5C5OTD4JEI", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296257391091, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296257, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b075635-79fd-4446-94ae-4cb35991cd24", "db_session_id": "23SW4XWEFX5C5OTD4JEI", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296257392776, "job": 1, "event": "recovery_finished"}
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564028f40000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: DB pointer 0x5640290d4000
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:04:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:04:17 compute-0 ceph-osd[87055]: bluestore.MempoolThread fragmentation_score=0.000124 took=0.000005s
Dec 09 16:04:17 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 09 16:04:17 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 09 16:04:17 compute-0 ceph-osd[87055]: _get_class not permitted to load lua
Dec 09 16:04:17 compute-0 ceph-osd[87055]: _get_class not permitted to load sdk
Dec 09 16:04:17 compute-0 ceph-osd[87055]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 09 16:04:17 compute-0 ceph-osd[87055]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 09 16:04:17 compute-0 ceph-osd[87055]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 09 16:04:17 compute-0 ceph-osd[87055]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 09 16:04:17 compute-0 ceph-osd[87055]: osd.1 0 load_pgs
Dec 09 16:04:17 compute-0 ceph-osd[87055]: osd.1 0 load_pgs opened 0 pgs
Dec 09 16:04:17 compute-0 ceph-osd[87055]: osd.1 0 log_to_monitors true
Dec 09 16:04:17 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1[87051]: 2025-12-09T16:04:17.481+0000 7f03f5e8b8c0 -1 osd.1 0 log_to_monitors true
Dec 09 16:04:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec 09 16:04:17 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3995149440,v1:192.168.122.100:6807/3995149440]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 09 16:04:17 compute-0 ceph-mon[75222]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 09 16:04:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:17 compute-0 podman[87597]: 2025-12-09 16:04:17.577044391 +0000 UTC m=+0.054424538 container create 1c6732bc68a50f58942e250e2761176739b06e13f8cbba1199ad8be0f9daceb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cannon, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 16:04:17 compute-0 systemd[1]: Started libpod-conmon-1c6732bc68a50f58942e250e2761176739b06e13f8cbba1199ad8be0f9daceb7.scope.
Dec 09 16:04:17 compute-0 podman[87597]: 2025-12-09 16:04:17.548767828 +0000 UTC m=+0.026148015 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:17 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:17 compute-0 podman[87597]: 2025-12-09 16:04:17.683613462 +0000 UTC m=+0.160993649 container init 1c6732bc68a50f58942e250e2761176739b06e13f8cbba1199ad8be0f9daceb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cannon, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 09 16:04:17 compute-0 podman[87597]: 2025-12-09 16:04:17.690575669 +0000 UTC m=+0.167955806 container start 1c6732bc68a50f58942e250e2761176739b06e13f8cbba1199ad8be0f9daceb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cannon, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 09 16:04:17 compute-0 boring_cannon[87614]: 167 167
Dec 09 16:04:17 compute-0 systemd[1]: libpod-1c6732bc68a50f58942e250e2761176739b06e13f8cbba1199ad8be0f9daceb7.scope: Deactivated successfully.
Dec 09 16:04:17 compute-0 podman[87597]: 2025-12-09 16:04:17.703837652 +0000 UTC m=+0.181217839 container attach 1c6732bc68a50f58942e250e2761176739b06e13f8cbba1199ad8be0f9daceb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:17 compute-0 podman[87597]: 2025-12-09 16:04:17.704353349 +0000 UTC m=+0.181733496 container died 1c6732bc68a50f58942e250e2761176739b06e13f8cbba1199ad8be0f9daceb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cannon, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 09 16:04:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-85628931b09ee69ffbf0771ddf401eb47622de08192d9b9910843495baa615dd-merged.mount: Deactivated successfully.
Dec 09 16:04:17 compute-0 podman[87597]: 2025-12-09 16:04:17.787529975 +0000 UTC m=+0.264910142 container remove 1c6732bc68a50f58942e250e2761176739b06e13f8cbba1199ad8be0f9daceb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cannon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 09 16:04:17 compute-0 systemd[1]: libpod-conmon-1c6732bc68a50f58942e250e2761176739b06e13f8cbba1199ad8be0f9daceb7.scope: Deactivated successfully.
Dec 09 16:04:17 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:17 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2573949666; not ready for session (expect reconnect)
Dec 09 16:04:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 16:04:17 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:17 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 16:04:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 09 16:04:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:04:17 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3995149440,v1:192.168.122.100:6807/3995149440]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 09 16:04:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Dec 09 16:04:17 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Dec 09 16:04:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 09 16:04:17 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3995149440,v1:192.168.122.100:6807/3995149440]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 09 16:04:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 09 16:04:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 16:04:17 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:17 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:17 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:17 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 16:04:17 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:17 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:18 compute-0 podman[87644]: 2025-12-09 16:04:18.063147546 +0000 UTC m=+0.043175291 container create 7a34a5dff3a488534c510d7ac7eec2c2e4ca6db8f88f1d78964ec6939f5816dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate-test, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 09 16:04:18 compute-0 ceph-osd[86013]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.362 iops: 8284.731 elapsed_sec: 0.362
Dec 09 16:04:18 compute-0 ceph-osd[86013]: log_channel(cluster) log [WRN] : OSD bench result of 8284.730822 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 09 16:04:18 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0[86009]: 2025-12-09T16:04:18.080+0000 7fa71ef61640 -1 osd.0 0 waiting for initial osdmap
Dec 09 16:04:18 compute-0 ceph-osd[86013]: osd.0 0 waiting for initial osdmap
Dec 09 16:04:18 compute-0 ceph-osd[86013]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 09 16:04:18 compute-0 ceph-osd[86013]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 09 16:04:18 compute-0 ceph-osd[86013]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 09 16:04:18 compute-0 ceph-osd[86013]: osd.0 9 check_osdmap_features require_osd_release unknown -> tentacle
Dec 09 16:04:18 compute-0 ceph-osd[86013]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 09 16:04:18 compute-0 ceph-osd[86013]: osd.0 9 set_numa_affinity not setting numa affinity
Dec 09 16:04:18 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-0[86009]: 2025-12-09T16:04:18.104+0000 7fa719d66640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 09 16:04:18 compute-0 ceph-osd[86013]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 09 16:04:18 compute-0 systemd[1]: Started libpod-conmon-7a34a5dff3a488534c510d7ac7eec2c2e4ca6db8f88f1d78964ec6939f5816dc.scope.
Dec 09 16:04:18 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f211f039b4a7490eb217601566c83772541dd42a0745e5566a4750250a73a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:18 compute-0 podman[87644]: 2025-12-09 16:04:18.042266944 +0000 UTC m=+0.022294699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f211f039b4a7490eb217601566c83772541dd42a0745e5566a4750250a73a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f211f039b4a7490eb217601566c83772541dd42a0745e5566a4750250a73a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f211f039b4a7490eb217601566c83772541dd42a0745e5566a4750250a73a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f211f039b4a7490eb217601566c83772541dd42a0745e5566a4750250a73a9/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:18 compute-0 podman[87644]: 2025-12-09 16:04:18.172019701 +0000 UTC m=+0.152047456 container init 7a34a5dff3a488534c510d7ac7eec2c2e4ca6db8f88f1d78964ec6939f5816dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:18 compute-0 podman[87644]: 2025-12-09 16:04:18.179421303 +0000 UTC m=+0.159449038 container start 7a34a5dff3a488534c510d7ac7eec2c2e4ca6db8f88f1d78964ec6939f5816dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate-test, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 09 16:04:18 compute-0 podman[87644]: 2025-12-09 16:04:18.183072402 +0000 UTC m=+0.163100137 container attach 7a34a5dff3a488534c510d7ac7eec2c2e4ca6db8f88f1d78964ec6939f5816dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate-test, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Dec 09 16:04:18 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate-test[87660]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 09 16:04:18 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate-test[87660]:                             [--no-systemd] [--no-tmpfs]
Dec 09 16:04:18 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate-test[87660]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 09 16:04:18 compute-0 systemd[1]: libpod-7a34a5dff3a488534c510d7ac7eec2c2e4ca6db8f88f1d78964ec6939f5816dc.scope: Deactivated successfully.
Dec 09 16:04:18 compute-0 podman[87644]: 2025-12-09 16:04:18.3590906 +0000 UTC m=+0.339118335 container died 7a34a5dff3a488534c510d7ac7eec2c2e4ca6db8f88f1d78964ec6939f5816dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 09 16:04:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-02f211f039b4a7490eb217601566c83772541dd42a0745e5566a4750250a73a9-merged.mount: Deactivated successfully.
Dec 09 16:04:18 compute-0 podman[87644]: 2025-12-09 16:04:18.414426728 +0000 UTC m=+0.394454463 container remove 7a34a5dff3a488534c510d7ac7eec2c2e4ca6db8f88f1d78964ec6939f5816dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate-test, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 16:04:18 compute-0 systemd[1]: libpod-conmon-7a34a5dff3a488534c510d7ac7eec2c2e4ca6db8f88f1d78964ec6939f5816dc.scope: Deactivated successfully.
Dec 09 16:04:18 compute-0 ceph-mgr[75515]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 09 16:04:18 compute-0 ceph-mon[75222]: Deploying daemon osd.2 on compute-0
Dec 09 16:04:18 compute-0 ceph-mon[75222]: from='osd.1 [v2:192.168.122.100:6806/3995149440,v1:192.168.122.100:6807/3995149440]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 09 16:04:18 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:18 compute-0 ceph-mon[75222]: from='osd.1 [v2:192.168.122.100:6806/3995149440,v1:192.168.122.100:6807/3995149440]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 09 16:04:18 compute-0 ceph-mon[75222]: osdmap e9: 3 total, 0 up, 3 in
Dec 09 16:04:18 compute-0 ceph-mon[75222]: from='osd.1 [v2:192.168.122.100:6806/3995149440,v1:192.168.122.100:6807/3995149440]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 09 16:04:18 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:18 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:18 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:18 compute-0 ceph-mon[75222]: OSD bench result of 8284.730822 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 09 16:04:18 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 09 16:04:18 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 09 16:04:18 compute-0 systemd[1]: Reloading.
Dec 09 16:04:18 compute-0 systemd-sysv-generator[87723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:04:18 compute-0 systemd-rc-local-generator[87720]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:04:18 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2573949666; not ready for session (expect reconnect)
Dec 09 16:04:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 16:04:18 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:18 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 16:04:18 compute-0 systemd[1]: Reloading.
Dec 09 16:04:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 09 16:04:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:04:18 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3995149440,v1:192.168.122.100:6807/3995149440]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 09 16:04:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Dec 09 16:04:18 compute-0 ceph-osd[87055]: osd.1 0 done with init, starting boot process
Dec 09 16:04:18 compute-0 ceph-osd[87055]: osd.1 0 start_boot
Dec 09 16:04:18 compute-0 ceph-osd[87055]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 09 16:04:18 compute-0 ceph-osd[87055]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 09 16:04:18 compute-0 ceph-osd[87055]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 09 16:04:18 compute-0 ceph-osd[87055]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 09 16:04:18 compute-0 ceph-osd[87055]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 09 16:04:18 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2573949666,v1:192.168.122.100:6803/2573949666] boot
Dec 09 16:04:18 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Dec 09 16:04:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 16:04:18 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:18 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:18 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:18 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:18 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:18 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3995149440; not ready for session (expect reconnect)
Dec 09 16:04:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:18 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:18 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:19 compute-0 systemd-sysv-generator[87765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:04:19 compute-0 ceph-osd[86013]: osd.0 10 state: booting -> active
Dec 09 16:04:19 compute-0 systemd-rc-local-generator[87761]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:04:19 compute-0 systemd[1]: Starting Ceph osd.2 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf...
Dec 09 16:04:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:04:19 compute-0 ceph-mon[75222]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:19 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:19 compute-0 ceph-mon[75222]: from='osd.1 [v2:192.168.122.100:6806/3995149440,v1:192.168.122.100:6807/3995149440]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 09 16:04:19 compute-0 ceph-mon[75222]: osd.0 [v2:192.168.122.100:6802/2573949666,v1:192.168.122.100:6803/2573949666] boot
Dec 09 16:04:19 compute-0 ceph-mon[75222]: osdmap e10: 3 total, 1 up, 3 in
Dec 09 16:04:19 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 09 16:04:19 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:19 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:19 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:19 compute-0 podman[87821]: 2025-12-09 16:04:19.557356041 +0000 UTC m=+0.054438669 container create 086e2b8fe61be60ddbc454dd2c36170e6a9f27e3538c521edd435b11d6c1bf91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 09 16:04:19 compute-0 podman[87821]: 2025-12-09 16:04:19.531063542 +0000 UTC m=+0.028146240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:19 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1255b3c86ee9ccec416b919f7cc33b1b655a0238cb42ffed3992f1d3c301c39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1255b3c86ee9ccec416b919f7cc33b1b655a0238cb42ffed3992f1d3c301c39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1255b3c86ee9ccec416b919f7cc33b1b655a0238cb42ffed3992f1d3c301c39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1255b3c86ee9ccec416b919f7cc33b1b655a0238cb42ffed3992f1d3c301c39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1255b3c86ee9ccec416b919f7cc33b1b655a0238cb42ffed3992f1d3c301c39/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:19 compute-0 podman[87821]: 2025-12-09 16:04:19.725674987 +0000 UTC m=+0.222757655 container init 086e2b8fe61be60ddbc454dd2c36170e6a9f27e3538c521edd435b11d6c1bf91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 09 16:04:19 compute-0 podman[87821]: 2025-12-09 16:04:19.740220282 +0000 UTC m=+0.237302930 container start 086e2b8fe61be60ddbc454dd2c36170e6a9f27e3538c521edd435b11d6c1bf91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:19 compute-0 podman[87821]: 2025-12-09 16:04:19.748858124 +0000 UTC m=+0.245940832 container attach 086e2b8fe61be60ddbc454dd2c36170e6a9f27e3538c521edd435b11d6c1bf91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:04:19 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:19 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate[87836]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:19 compute-0 bash[87821]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:19 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate[87836]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:19 compute-0 bash[87821]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 09 16:04:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:04:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec 09 16:04:19 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3995149440; not ready for session (expect reconnect)
Dec 09 16:04:19 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec 09 16:04:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:19 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:19 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:19 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:19 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:20 compute-0 ceph-mgr[75515]: [devicehealth INFO root] creating mgr pool
Dec 09 16:04:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec 09 16:04:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 09 16:04:20 compute-0 lvm[87923]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:04:20 compute-0 lvm[87923]: VG ceph_vg1 finished
Dec 09 16:04:20 compute-0 lvm[87922]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:04:20 compute-0 lvm[87922]: VG ceph_vg0 finished
Dec 09 16:04:20 compute-0 lvm[87926]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:04:20 compute-0 lvm[87926]: VG ceph_vg2 finished
Dec 09 16:04:20 compute-0 lvm[87927]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:04:20 compute-0 lvm[87927]: VG ceph_vg2 finished
Dec 09 16:04:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate[87836]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 09 16:04:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate[87836]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:20 compute-0 bash[87821]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 09 16:04:20 compute-0 bash[87821]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate[87836]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:20 compute-0 bash[87821]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 16:04:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate[87836]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 09 16:04:20 compute-0 bash[87821]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 09 16:04:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate[87836]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 09 16:04:20 compute-0 bash[87821]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 09 16:04:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate[87836]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:20 compute-0 bash[87821]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate[87836]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:20 compute-0 bash[87821]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate[87836]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 09 16:04:20 compute-0 bash[87821]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 09 16:04:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate[87836]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 09 16:04:20 compute-0 bash[87821]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 09 16:04:20 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate[87836]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 09 16:04:20 compute-0 bash[87821]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 09 16:04:20 compute-0 systemd[1]: libpod-086e2b8fe61be60ddbc454dd2c36170e6a9f27e3538c521edd435b11d6c1bf91.scope: Deactivated successfully.
Dec 09 16:04:20 compute-0 systemd[1]: libpod-086e2b8fe61be60ddbc454dd2c36170e6a9f27e3538c521edd435b11d6c1bf91.scope: Consumed 1.656s CPU time.
Dec 09 16:04:20 compute-0 podman[87821]: 2025-12-09 16:04:20.960208923 +0000 UTC m=+1.457291581 container died 086e2b8fe61be60ddbc454dd2c36170e6a9f27e3538c521edd435b11d6c1bf91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 09 16:04:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 16:04:20 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3995149440; not ready for session (expect reconnect)
Dec 09 16:04:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:20 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:20 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 09 16:04:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Dec 09 16:04:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Dec 09 16:04:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Dec 09 16:04:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Dec 09 16:04:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Dec 09 16:04:20 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Dec 09 16:04:21 compute-0 ceph-osd[86013]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 09 16:04:21 compute-0 ceph-osd[86013]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 09 16:04:21 compute-0 ceph-osd[86013]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 09 16:04:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:21 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:21 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:21 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:21 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec 09 16:04:21 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 09 16:04:21 compute-0 ceph-mon[75222]: purged_snaps scrub starts
Dec 09 16:04:21 compute-0 ceph-mon[75222]: purged_snaps scrub ok
Dec 09 16:04:21 compute-0 ceph-mon[75222]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 16:04:21 compute-0 ceph-mon[75222]: osdmap e11: 3 total, 1 up, 3 in
Dec 09 16:04:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 09 16:04:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1255b3c86ee9ccec416b919f7cc33b1b655a0238cb42ffed3992f1d3c301c39-merged.mount: Deactivated successfully.
Dec 09 16:04:21 compute-0 podman[87821]: 2025-12-09 16:04:21.079869971 +0000 UTC m=+1.576952619 container remove 086e2b8fe61be60ddbc454dd2c36170e6a9f27e3538c521edd435b11d6c1bf91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:04:21 compute-0 podman[88081]: 2025-12-09 16:04:21.313863802 +0000 UTC m=+0.045748465 container create 2c04f740a8b253d8ecd8e18a6e47e557960887b7213c8498c38edd5db483ed78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6642f7dd12681e1bf5c7fb0ded327a91c47ff4d7980f5ce1bae6614248b368c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6642f7dd12681e1bf5c7fb0ded327a91c47ff4d7980f5ce1bae6614248b368c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6642f7dd12681e1bf5c7fb0ded327a91c47ff4d7980f5ce1bae6614248b368c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6642f7dd12681e1bf5c7fb0ded327a91c47ff4d7980f5ce1bae6614248b368c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6642f7dd12681e1bf5c7fb0ded327a91c47ff4d7980f5ce1bae6614248b368c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:21 compute-0 podman[88081]: 2025-12-09 16:04:21.386752823 +0000 UTC m=+0.118637486 container init 2c04f740a8b253d8ecd8e18a6e47e557960887b7213c8498c38edd5db483ed78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:21 compute-0 podman[88081]: 2025-12-09 16:04:21.292807765 +0000 UTC m=+0.024692478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:21 compute-0 podman[88081]: 2025-12-09 16:04:21.39401352 +0000 UTC m=+0.125898143 container start 2c04f740a8b253d8ecd8e18a6e47e557960887b7213c8498c38edd5db483ed78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:04:21 compute-0 bash[88081]: 2c04f740a8b253d8ecd8e18a6e47e557960887b7213c8498c38edd5db483ed78
Dec 09 16:04:21 compute-0 systemd[1]: Started Ceph osd.2 for 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:04:21 compute-0 ceph-osd[88099]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 16:04:21 compute-0 ceph-osd[88099]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: pidfile_write: ignore empty --pid-file
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 sudo[87103]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54400 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d54000 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 ceph-osd[88099]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec 09 16:04:21 compute-0 sudo[88119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:21 compute-0 ceph-osd[88099]: load: jerasure load: lrc 
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 sudo[88119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:21 compute-0 sudo[88119]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 sudo[88153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:04:21 compute-0 sudo[88153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:21 compute-0 ceph-osd[88099]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 09 16:04:21 compute-0 ceph-osd[88099]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f216d55c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f2179eb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f2179eb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f2179eb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f2179eb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount shared_bdev_used = 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: RocksDB version: 7.9.2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Git sha 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: DB SUMMARY
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: DB Session ID:  U5DP6KZ4Y6UGQWTVUO6I
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: CURRENT file:  CURRENT
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: IDENTITY file:  IDENTITY
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                         Options.error_if_exists: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.create_if_missing: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                         Options.paranoid_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                                     Options.env: 0x55f216be5ea0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                                Options.info_log: 0x55f217c368a0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_file_opening_threads: 16
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                              Options.statistics: (nil)
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.use_fsync: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.max_log_file_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                         Options.allow_fallocate: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.use_direct_reads: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.create_missing_column_families: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                              Options.db_log_dir: 
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                                 Options.wal_dir: db.wal
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.advise_random_on_open: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.write_buffer_manager: 0x55f216c4ab40
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                            Options.rate_limiter: (nil)
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.unordered_write: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.row_cache: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                              Options.wal_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.allow_ingest_behind: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.two_write_queues: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.manual_wal_flush: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.wal_compression: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.atomic_flush: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.log_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.allow_data_in_errors: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.db_host_id: __hostname__
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.max_background_jobs: 4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.max_background_compactions: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.max_subcompactions: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.max_open_files: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.bytes_per_sync: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.max_background_flushes: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Compression algorithms supported:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kZSTD supported: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kXpressCompression supported: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kBZip2Compression supported: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kLZ4Compression supported: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kZlibCompression supported: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kLZ4HCCompression supported: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kSnappyCompression supported: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
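The [m-0] dump above pairs a BlockBasedTable factory (4 KiB blocks, format_version 5, a bloom filter, index and filter blocks cached with the top-level index pinned) with a shared ~461 MiB block cache split into 2^4 shards. A minimal C++ sketch of comparable settings against the stock RocksDB API, for orientation only: BinnedLRUCache is Ceph's own cache implementation, approximated here with rocksdb::NewLRUCache, and the bloom bits-per-key is an assumed value, since the dump prints only "filter_policy: bloomfilter".

#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Sketch: table/cache settings mirroring the [m-0] dump above.
rocksdb::BlockBasedTableOptions MakeTableOptions() {
  rocksdb::BlockBasedTableOptions tbo;
  tbo.block_size = 4096;                      // block_size: 4096
  tbo.block_size_deviation = 10;              // block_size_deviation: 10
  tbo.block_restart_interval = 16;            // block_restart_interval: 16
  tbo.format_version = 5;                     // format_version: 5
  tbo.cache_index_and_filter_blocks = true;   // cache_index_and_filter_blocks: 1
  tbo.pin_top_level_index_and_filter = true;  // pin_top_level_index_and_filter: 1
  tbo.whole_key_filtering = true;             // whole_key_filtering: 1
  // BinnedLRUCache is Ceph-internal; a stock LRU cache with the same
  // capacity (483183820 bytes) and num_shard_bits (4) is the closest analogue.
  tbo.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
  // Assumption: 10 bits/key is a commonly used bloom setting, not read
  // from the log, which names only the policy.
  tbo.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
  return tbo;
}
// Typically installed via:
//   options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(tbo));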
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
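The memtable and compaction knobs are identical from family to family: 16 MiB write buffers (up to 64 in memory, merged in groups of 6 at flush), LZ4 on all levels with bottommost compression disabled, seven levels, L0 triggers of 8/20/36 files, 64 MiB target files, and a 1 GiB level-1 budget growing 8x per level. Roughly the same knobs expressed on rocksdb::ColumnFamilyOptions, as a sketch (field names follow the dump; this is not Ceph's actual wiring):

#include <rocksdb/options.h>

// Sketch: the per-column-family knobs printed in the dumps above.
rocksdb::ColumnFamilyOptions MakeCfOptions() {
  rocksdb::ColumnFamilyOptions cf;
  cf.write_buffer_size = 16 << 20;             // 16777216 bytes
  cf.max_write_buffer_number = 64;
  cf.min_write_buffer_number_to_merge = 6;
  cf.compression = rocksdb::kLZ4Compression;   // Options.compression: LZ4
  cf.num_levels = 7;
  cf.level0_file_num_compaction_trigger = 8;
  cf.level0_slowdown_writes_trigger = 20;
  cf.level0_stop_writes_trigger = 36;
  cf.target_file_size_base = 64 << 20;         // 67108864 bytes
  cf.max_bytes_for_level_base = 1 << 30;       // 1073741824 bytes
  cf.max_bytes_for_level_multiplier = 8.0;
  cf.compaction_pri = rocksdb::kMinOverlappingRatio;
  cf.ttl = 2592000;                            // 30 days
  cf.force_consistency_checks = true;
  return cf;
}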
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
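Every dump also registers a CompactOnDeletionCollector (sliding window 32768, deletion trigger 16384, deletion ratio 0), which marks an SST file for compaction once a 32768-entry window contains 16384 tombstones. RocksDB ships this collector; a sketch of wiring it up with the parameters from the log:

#include <rocksdb/options.h>
#include <rocksdb/utilities/table_properties_collectors.h>

// Sketch: attach the deletion-triggered compaction collector seen in the
// "table_properties_collectors" line of each dump.
void AddDeletionCollector(rocksdb::ColumnFamilyOptions& cf) {
  cf.table_properties_collector_factories.emplace_back(
      rocksdb::NewCompactOnDeletionCollectorFactory(
          /*sliding_window_size=*/32768,
          /*deletion_trigger=*/16384,
          /*deletion_ratio=*/0.0));
}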
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
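The m-N and p-N names are consistent with BlueStore sharding a logical column family across several physical ones, which is why an identical options block prints once per family. As an illustration in plain RocksDB terms (the path is hypothetical; the family names are taken from the dumps), opening a database with such a set of families looks roughly like:

#include <cassert>
#include <vector>
#include <rocksdb/db.h>
#include <rocksdb/options.h>

// Sketch: open a DB whose column families each carry the options above.
void OpenShardedDb() {
  rocksdb::ColumnFamilyOptions cf;  // in practice, filled as in the earlier sketch
  std::vector<rocksdb::ColumnFamilyDescriptor> families = {
      {rocksdb::kDefaultColumnFamilyName, cf},
      {"m-0", cf}, {"m-1", cf}, {"m-2", cf},
      {"p-0", cf}, {"p-1", cf},
  };
  rocksdb::DBOptions db_opts;
  db_opts.create_if_missing = true;
  db_opts.create_missing_column_families = true;
  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(
      db_opts, "/tmp/example-db", families, &handles, &db);
  assert(s.ok());
  // ... use db and handles, then release them ...
  for (auto* h : handles) delete h;
  delete db;
}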
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
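[annotation] The per-column-family dumps that follow all repeat the same "Options.<name>: <value>" layout under an "Options for column family [...]" header, so they can be collected mechanically. A minimal sketch, assuming the journal was saved to a file such as osd.log (an example path, e.g. via journalctl -u ceph-osd@2 > osd.log); the indented table_factory sub-options are deliberately not captured here:

    import re
    from collections import defaultdict

    cf_header = re.compile(r"Options for column family \[([^\]]+)\]")
    opt_line = re.compile(r"Options\.([A-Za-z0-9_.\[\]]+):\s+(.+)$")

    options = defaultdict(dict)
    cf = "unknown"
    with open("osd.log") as f:          # example path for a saved journal capture
        for line in f:
            if (m := cf_header.search(line)):
                cf = m.group(1)         # start of a new column family dump
            elif (m := opt_line.search(line)):
                options[cf][m.group(1)] = m.group(2).strip()

    print(options["p-2"]["write_buffer_size"])   # -> 16777216, as logged below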
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
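[annotation] The compaction geometry can be read straight off the values above: level_compaction_dynamic_level_bytes is 0 and every max_bytes_for_level_multiplier_addtl factor is 1, so with max_bytes_for_level_base = 1 GiB and max_bytes_for_level_multiplier = 8 the static target for level n is base * 8^(n-1). A worked check:

    base = 1073741824            # max_bytes_for_level_base (1 GiB), from the dump
    mult = 8.0                   # max_bytes_for_level_multiplier
    for n in range(1, 7):        # L1..L6; num_levels: 7 counts L0 as well
        print(f"L{n}: {base * mult ** (n - 1) / 2**30:.0f} GiB")
    # L1: 1  L2: 8  L3: 64  L4: 512  L5: 4096  L6: 32768 (GiB)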
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
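[annotation] The memtable settings repeated in every dump imply the flush and memory bounds directly: 16 MiB memtables (write_buffer_size) merged at least 6 at a time (min_write_buffer_number_to_merge) give flush batches of roughly 96 MiB, and the ceiling of 64 memtables (max_write_buffer_number) bounds worst-case memtable RAM per column family at about 1 GiB:

    write_buffer_size = 16777216                                   # 16 MiB, from the dump
    print(write_buffer_size * 6 / 2**20, "MiB per flush batch")    # 96.0
    print(write_buffer_size * 64 / 2**30, "GiB memtable ceiling")  # 1.0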
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7a92c78e-394e-4b2f-b06d-af1665a7b8fd
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296261816373, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296261817992, "job": 1, "event": "recovery_finished"}
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
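(The _open_db line logs the bluestore_rocksdb_options string: a flat, comma-separated key=value list. A sketch parsing it into a dict, which holds for this string since none of the values contain commas; note "2MB" is a RocksDB size literal and stays a string:)

    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    # Split into key=value pairs; split on the first '=' only.
    options = dict(pair.split("=", 1) for pair in opts_str.split(","))
    print(options["write_buffer_size"])   # 16777216
    print(options["compaction_readahead_size"])  # 2MB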
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: freelist init
Dec 09 16:04:21 compute-0 ceph-osd[88099]: freelist _read_cfg
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
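(The _init_alloc line reports sizes in hex. Converting them confirms the 20 GiB capacity and shows why fragmentation is near zero: only 12 KiB is allocated, spread over 2 extents. A worked conversion:)

    capacity = 0x4ffc00000           # 21470642176 bytes, as logged by bdev open
    free = 0x4ffbfd000
    GiB = 1 << 30

    print(capacity / GiB)            # ~20.0 (19.996... GiB)
    print((capacity - free) / 1024)  # 12.0 KiB allocated so far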
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs umount
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f2179eb800 /var/lib/ceph/osd/ceph-2/block) close
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f2179eb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f2179eb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f2179eb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bdev(0x55f2179eb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
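(Two informational messages above are worth reading together: the F_SET_FILE_RW_HINT ioctl fails with EINVAL, and the backing device advertises a 512-byte st_blksize while BlueStore keeps its configured 4096-byte block size; the daemon continues either way. A sketch of the same st_blksize probe, using the path from the log:)

    import os

    # st_blksize is the kernel's preferred I/O size for the backing device/file;
    # BlueStore compares it against its own bdev_block_size (4096 here) and
    # logs a note, rather than failing, when they differ.
    st = os.stat("/var/lib/ceph/osd/ceph-2/block")
    print(st.st_blksize)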
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluefs mount shared_bdev_used = 27262976
Dec 09 16:04:21 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
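(The numbers above fit together: shared_bdev_used = 27262976 bytes is 26 MiB of BlueFS data, and the size logged for both db and db.slow equals 95% of the device capacity; the 95% share is an inference from the arithmetic, not something the log states. Worked out:)

    capacity = 21470642176           # from the bdev open line
    print(int(capacity * 0.95))      # 20397110067, the size set for db and db.slow
    print(27262976 / (1 << 20))      # 26.0 MiB used by BlueFS on the shared device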
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: RocksDB version: 7.9.2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Git sha 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: DB SUMMARY
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: DB Session ID:  U5DP6KZ4Y6UGQWTVUO6J
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: CURRENT file:  CURRENT
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: IDENTITY file:  IDENTITY
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                         Options.error_if_exists: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.create_if_missing: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                         Options.paranoid_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                                     Options.env: 0x55f217e06a80
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                                Options.info_log: 0x55f217c36a20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_file_opening_threads: 16
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                              Options.statistics: (nil)
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.use_fsync: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.max_log_file_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                         Options.allow_fallocate: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.use_direct_reads: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.create_missing_column_families: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                              Options.db_log_dir: 
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                                 Options.wal_dir: db.wal
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.advise_random_on_open: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.write_buffer_manager: 0x55f216c4b900
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                            Options.rate_limiter: (nil)
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.unordered_write: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.row_cache: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                              Options.wal_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.allow_ingest_behind: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.two_write_queues: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.manual_wal_flush: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.wal_compression: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.atomic_flush: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.log_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.allow_data_in_errors: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.db_host_id: __hostname__
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.max_background_jobs: 4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.max_background_compactions: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.max_subcompactions: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.delayed_write_rate: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.max_open_files: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.bytes_per_sync: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.max_background_flushes: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Compression algorithms supported:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kZSTD supported: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kXpressCompression supported: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kBZip2Compression supported: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kLZ4Compression supported: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kZlibCompression supported: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kLZ4HCCompression supported: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         kSnappyCompression supported: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: DMutex implementation: pthread_mutex_t
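(This RocksDB 7.9.2 build has only LZ4, Zlib, LZ4HC, and Snappy compiled in, which is consistent with the _open_db option string selecting compression=kLZ4Compression. The support table, restated as data:)

    # Compression support as logged by this build (1 = compiled in).
    supported = {
        "kZSTD": 0,
        "kXpressCompression": 0,
        "kBZip2Compression": 0,
        "kZSTDNotFinalCompression": 0,
        "kLZ4Compression": 1,
        "kZlibCompression": 1,
        "kLZ4HCCompression": 1,
        "kSnappyCompression": 1,
    }
    print([name for name, ok in supported.items() if ok])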
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
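(The [default] column-family options above imply concrete memory and level budgets. With write_buffer_size 16 MiB, min_write_buffer_number_to_merge 6, and max_write_buffer_number 64, a flush is triggered only once ~96 MiB of memtables accumulate, with a 1 GiB ceiling per column family; with max_bytes_for_level_base 1 GiB, multiplier 8, and level_compaction_dynamic_level_bytes off, level targets grow 1 GiB, 8 GiB, 64 GiB, and so on. A worked computation:)

    write_buffer_size = 16777216     # 16 MiB
    min_merge = 6                    # min_write_buffer_number_to_merge
    max_buffers = 64                 # max_write_buffer_number
    MiB, GiB = 1 << 20, 1 << 30

    print(write_buffer_size * min_merge / MiB)    # 96.0 MiB buffered before a flush
    print(write_buffer_size * max_buffers / GiB)  # 1.0 GiB memtable ceiling per CF

    base = 1073741824                # max_bytes_for_level_base
    mult = 8                         # max_bytes_for_level_multiplier
    for level in range(1, 5):
        # L1=1 GiB, L2=8 GiB, L3=64 GiB, L4=512 GiB target sizes
        print(level, base * mult ** (level - 1) / GiB, "GiB")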
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v31: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c36bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c370c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
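The column-family options dumped above map directly onto RocksDB's C++ API. A minimal sketch of an equivalent programmatic configuration follows, for orientation only: BlueStore assembles these options internally, the BinnedLRUCache named in the log is Ceph's own cache implementation (stood in for here by the stock rocksdb::NewLRUCache), and the bloom bits-per-key value is an assumption, since the dump reports only the policy name.

    // Sketch: rebuilding the logged column-family options with the stock
    // RocksDB C++ API. Not Ceph code; see the caveats above.
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::Options MakeLoggedOptions() {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;                       // block_size: 4096
      t.cache_index_and_filter_blocks = true;    // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;   // pin_top_level_index_and_filter: 1
      t.whole_key_filtering = true;              // whole_key_filtering: 1
      t.format_version = 5;                      // format_version: 5
      t.block_cache = rocksdb::NewLRUCache(
          536870912 /* capacity (512 MiB) */, 4 /* num_shard_bits */);
      t.filter_policy.reset(
          rocksdb::NewBloomFilterPolicy(10));    // 10 bits/key assumed, not logged

      rocksdb::Options o;
      o.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      o.write_buffer_size = 16777216;            // 16 MiB memtables
      o.max_write_buffer_number = 64;
      o.min_write_buffer_number_to_merge = 6;
      o.compression = rocksdb::kLZ4Compression;  // Options.compression: LZ4
      o.num_levels = 7;
      o.level0_file_num_compaction_trigger = 8;
      o.target_file_size_base = 67108864;        // 64 MiB
      o.max_bytes_for_level_base = 1073741824;   // 1 GiB
      o.max_bytes_for_level_multiplier = 8;
      o.compaction_style = rocksdb::kCompactionStyleLevel;
      o.compaction_pri = rocksdb::kMinOverlappingRatio;
      return o;
    }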
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c370c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:           Options.merge_operator: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f217c370c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f216be9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.compression: LZ4
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.num_levels: 7
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.bloom_locality: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                               Options.ttl: 2592000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                       Options.enable_blob_files: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                           Options.min_blob_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7a92c78e-394e-4b2f-b06d-af1665a7b8fd
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296261876820, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296261891644, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296261, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a92c78e-394e-4b2f-b06d-af1665a7b8fd", "db_session_id": "U5DP6KZ4Y6UGQWTVUO6J", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296261909253, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296261, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a92c78e-394e-4b2f-b06d-af1665a7b8fd", "db_session_id": "U5DP6KZ4Y6UGQWTVUO6J", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296261912321, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296261, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a92c78e-394e-4b2f-b06d-af1665a7b8fd", "db_session_id": "U5DP6KZ4Y6UGQWTVUO6J", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296261932813, "job": 1, "event": "recovery_finished"}
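The EVENT_LOG_v1 table_file_creation entries above (one per memtable flushed during WAL recovery) carry the same information RocksDB delivers to embedders through its EventListener interface. A minimal sketch of subscribing to those events, assuming a plain RocksDB embedding rather than Ceph's actual wiring:

    // Sketch: observing table_file_creation events in-process instead of
    // scraping the EVENT_LOG_v1 lines. Illustrative only.
    #include <cstdio>
    #include <memory>
    #include <rocksdb/listener.h>
    #include <rocksdb/options.h>

    class FlushLogger : public rocksdb::EventListener {
     public:
      void OnTableFileCreated(
          const rocksdb::TableFileCreationInfo& info) override {
        // Mirrors logged fields: cf_name, file path, file_size, num_entries.
        std::printf("cf=%s path=%s size=%llu entries=%llu\n",
                    info.cf_name.c_str(), info.file_path.c_str(),
                    static_cast<unsigned long long>(info.file_size),
                    static_cast<unsigned long long>(
                        info.table_properties.num_entries));
      }
    };

    // Usage: options.listeners.push_back(std::make_shared<FlushLogger>());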
Dec 09 16:04:21 compute-0 ceph-osd[88099]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 09 16:04:21 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3995149440; not ready for session (expect reconnect)
Dec 09 16:04:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:21 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:21 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:21 compute-0 podman[88578]: 2025-12-09 16:04:21.988875058 +0000 UTC m=+0.064893099 container create 23f4d2f9d91e7db8869420e210ea2867192738f0198826f13b3dbc19932f7689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:04:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 09 16:04:22 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 09 16:04:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Dec 09 16:04:22 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f217e1a000
Dec 09 16:04:22 compute-0 ceph-osd[88099]: rocksdb: DB pointer 0x55f217df0000
Dec 09 16:04:22 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
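The option string BlueStore logs here is in RocksDB's key=value option syntax (in Ceph it is populated from the bluestore_rocksdb_options setting), so it round-trips through rocksdb::GetOptionsFromString. A minimal sketch against an abridged copy of the logged string:

    // Sketch: parsing a BlueStore-style RocksDB option string back into an
    // Options struct. The string below is an abridged copy of the logged one.
    #include <cassert>
    #include <string>
    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>

    int main() {
      const std::string opts_str =
          "compression=kLZ4Compression,max_write_buffer_number=64,"
          "min_write_buffer_number_to_merge=6,"
          "compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,"
          "level0_file_num_compaction_trigger=8,"
          "max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,"
          "max_total_wal_size=1073741824";
      rocksdb::Options base, parsed;
      rocksdb::Status s = rocksdb::GetOptionsFromString(base, opts_str, &parsed);
      assert(s.ok());
      assert(parsed.compression == rocksdb::kLZ4Compression);
      assert(parsed.write_buffer_size == 16777216);
      return 0;
    }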
Dec 09 16:04:22 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec 09 16:04:22 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec 09 16:04:22 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:04:22 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:04:22 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Dec 09 16:04:22 compute-0 ceph-osd[88099]: bluestore.MempoolThread fragmentation_score=0.000194 took=0.000005s
Dec 09 16:04:22 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 09 16:04:22 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 09 16:04:22 compute-0 ceph-osd[88099]: _get_class not permitted to load lua
Dec 09 16:04:22 compute-0 ceph-osd[88099]: _get_class not permitted to load sdk
Dec 09 16:04:22 compute-0 ceph-osd[88099]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 09 16:04:22 compute-0 ceph-osd[88099]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 09 16:04:22 compute-0 ceph-osd[88099]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 09 16:04:22 compute-0 ceph-osd[88099]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 09 16:04:22 compute-0 ceph-osd[88099]: osd.2 0 load_pgs
Dec 09 16:04:22 compute-0 ceph-osd[88099]: osd.2 0 load_pgs opened 0 pgs
Dec 09 16:04:22 compute-0 ceph-osd[88099]: osd.2 0 log_to_monitors true
Dec 09 16:04:22 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2[88095]: 2025-12-09T16:04:22.028+0000 7feb56bdd8c0 -1 osd.2 0 log_to_monitors true
Dec 09 16:04:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:22 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:22 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:22 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:22 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:22 compute-0 systemd[1]: Started libpod-conmon-23f4d2f9d91e7db8869420e210ea2867192738f0198826f13b3dbc19932f7689.scope.
Dec 09 16:04:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec 09 16:04:22 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1632989575,v1:192.168.122.100:6811/1632989575]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 09 16:04:22 compute-0 podman[88578]: 2025-12-09 16:04:21.947610389 +0000 UTC m=+0.023628450 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 09 16:04:22 compute-0 ceph-mon[75222]: osdmap e12: 3 total, 1 up, 3 in
Dec 09 16:04:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 09 16:04:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 09 16:04:22 compute-0 ceph-mon[75222]: osdmap e13: 3 total, 1 up, 3 in
Dec 09 16:04:22 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:22 compute-0 podman[88578]: 2025-12-09 16:04:22.092461926 +0000 UTC m=+0.168479987 container init 23f4d2f9d91e7db8869420e210ea2867192738f0198826f13b3dbc19932f7689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cartwright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:22 compute-0 podman[88578]: 2025-12-09 16:04:22.100578042 +0000 UTC m=+0.176596083 container start 23f4d2f9d91e7db8869420e210ea2867192738f0198826f13b3dbc19932f7689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 09 16:04:22 compute-0 loving_cartwright[88628]: 167 167
Dec 09 16:04:22 compute-0 systemd[1]: libpod-23f4d2f9d91e7db8869420e210ea2867192738f0198826f13b3dbc19932f7689.scope: Deactivated successfully.
Dec 09 16:04:22 compute-0 podman[88578]: 2025-12-09 16:04:22.110302081 +0000 UTC m=+0.186320122 container attach 23f4d2f9d91e7db8869420e210ea2867192738f0198826f13b3dbc19932f7689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cartwright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:22 compute-0 podman[88578]: 2025-12-09 16:04:22.111172694 +0000 UTC m=+0.187190735 container died 23f4d2f9d91e7db8869420e210ea2867192738f0198826f13b3dbc19932f7689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 09 16:04:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb7ec46fff6d0a1157b0546b983698747d9cb5f41595fd6ae9776eaa4621f6c7-merged.mount: Deactivated successfully.
Dec 09 16:04:22 compute-0 podman[88578]: 2025-12-09 16:04:22.175587749 +0000 UTC m=+0.251605790 container remove 23f4d2f9d91e7db8869420e210ea2867192738f0198826f13b3dbc19932f7689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 09 16:04:22 compute-0 systemd[1]: libpod-conmon-23f4d2f9d91e7db8869420e210ea2867192738f0198826f13b3dbc19932f7689.scope: Deactivated successfully.
Dec 09 16:04:22 compute-0 podman[88650]: 2025-12-09 16:04:22.33525998 +0000 UTC m=+0.049905840 container create fd0f265f0916dd30b65e90a18af6443863c2579614feba3408a0590c08a76baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:04:22 compute-0 systemd[1]: Started libpod-conmon-fd0f265f0916dd30b65e90a18af6443863c2579614feba3408a0590c08a76baa.scope.
Dec 09 16:04:22 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:22 compute-0 podman[88650]: 2025-12-09 16:04:22.310663095 +0000 UTC m=+0.025309155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a623a570398a13be38b877d59475553531da63f5df93c354ae196539cff45bcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a623a570398a13be38b877d59475553531da63f5df93c354ae196539cff45bcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a623a570398a13be38b877d59475553531da63f5df93c354ae196539cff45bcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a623a570398a13be38b877d59475553531da63f5df93c354ae196539cff45bcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:22 compute-0 podman[88650]: 2025-12-09 16:04:22.417702655 +0000 UTC m=+0.132348535 container init fd0f265f0916dd30b65e90a18af6443863c2579614feba3408a0590c08a76baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 16:04:22 compute-0 podman[88650]: 2025-12-09 16:04:22.425734529 +0000 UTC m=+0.140380389 container start fd0f265f0916dd30b65e90a18af6443863c2579614feba3408a0590c08a76baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mirzakhani, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:04:22 compute-0 podman[88650]: 2025-12-09 16:04:22.431982305 +0000 UTC m=+0.146628195 container attach fd0f265f0916dd30b65e90a18af6443863c2579614feba3408a0590c08a76baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 09 16:04:22 compute-0 ceph-osd[87055]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.126 iops: 8224.234 elapsed_sec: 0.365
Dec 09 16:04:22 compute-0 ceph-osd[87055]: log_channel(cluster) log [WRN] : OSD bench result of 8224.233575 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 09 16:04:22 compute-0 ceph-osd[87055]: osd.1 0 waiting for initial osdmap
Dec 09 16:04:22 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1[87051]: 2025-12-09T16:04:22.445+0000 7f03f261f640 -1 osd.1 0 waiting for initial osdmap
Dec 09 16:04:22 compute-0 ceph-osd[87055]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 09 16:04:22 compute-0 ceph-osd[87055]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 09 16:04:22 compute-0 ceph-osd[87055]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 09 16:04:22 compute-0 ceph-osd[87055]: osd.1 13 check_osdmap_features require_osd_release unknown -> tentacle
Dec 09 16:04:22 compute-0 ceph-osd[87055]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 09 16:04:22 compute-0 ceph-osd[87055]: osd.1 13 set_numa_affinity not setting numa affinity
Dec 09 16:04:22 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-1[87051]: 2025-12-09T16:04:22.468+0000 7f03ecc12640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 09 16:04:22 compute-0 ceph-osd[87055]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Dec 09 16:04:22 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3995149440; not ready for session (expect reconnect)
Dec 09 16:04:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:22 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:22 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 16:04:23 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 09 16:04:23 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1632989575,v1:192.168.122.100:6811/1632989575]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 09 16:04:23 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Dec 09 16:04:23 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/3995149440,v1:192.168.122.100:6807/3995149440] boot
Dec 09 16:04:23 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Dec 09 16:04:23 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 09 16:04:23 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1632989575,v1:192.168.122.100:6811/1632989575]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 09 16:04:23 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 09 16:04:23 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 16:04:23 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:23 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:23 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:23 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:23 compute-0 ceph-osd[87055]: osd.1 14 state: booting -> active
Dec 09 16:04:23 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:04:23 compute-0 ceph-mon[75222]: pgmap v31: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 09 16:04:23 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:23 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:23 compute-0 ceph-mon[75222]: from='osd.2 [v2:192.168.122.100:6810/1632989575,v1:192.168.122.100:6811/1632989575]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 09 16:04:23 compute-0 ceph-mon[75222]: OSD bench result of 8224.233575 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 09 16:04:23 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:23 compute-0 ceph-mon[75222]: from='osd.2 [v2:192.168.122.100:6810/1632989575,v1:192.168.122.100:6811/1632989575]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 09 16:04:23 compute-0 ceph-mon[75222]: osd.1 [v2:192.168.122.100:6806/3995149440,v1:192.168.122.100:6807/3995149440] boot
Dec 09 16:04:23 compute-0 ceph-mon[75222]: osdmap e14: 3 total, 2 up, 3 in
Dec 09 16:04:23 compute-0 ceph-mon[75222]: from='osd.2 [v2:192.168.122.100:6810/1632989575,v1:192.168.122.100:6811/1632989575]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 09 16:04:23 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 09 16:04:23 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:23 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 09 16:04:23 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 09 16:04:23 compute-0 lvm[88741]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:04:23 compute-0 lvm[88741]: VG ceph_vg0 finished
Dec 09 16:04:23 compute-0 lvm[88743]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:04:23 compute-0 lvm[88743]: VG ceph_vg1 finished
Dec 09 16:04:23 compute-0 lvm[88744]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:04:23 compute-0 lvm[88744]: VG ceph_vg0 finished
Dec 09 16:04:23 compute-0 lvm[88745]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:04:23 compute-0 lvm[88745]: VG ceph_vg2 finished
Dec 09 16:04:23 compute-0 amazing_mirzakhani[88666]: {}
Dec 09 16:04:23 compute-0 systemd[1]: libpod-fd0f265f0916dd30b65e90a18af6443863c2579614feba3408a0590c08a76baa.scope: Deactivated successfully.
Dec 09 16:04:23 compute-0 systemd[1]: libpod-fd0f265f0916dd30b65e90a18af6443863c2579614feba3408a0590c08a76baa.scope: Consumed 1.366s CPU time.
Dec 09 16:04:23 compute-0 podman[88748]: 2025-12-09 16:04:23.324914488 +0000 UTC m=+0.030684778 container died fd0f265f0916dd30b65e90a18af6443863c2579614feba3408a0590c08a76baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mirzakhani, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a623a570398a13be38b877d59475553531da63f5df93c354ae196539cff45bcd-merged.mount: Deactivated successfully.
Dec 09 16:04:23 compute-0 podman[88748]: 2025-12-09 16:04:23.369222248 +0000 UTC m=+0.074992518 container remove fd0f265f0916dd30b65e90a18af6443863c2579614feba3408a0590c08a76baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:23 compute-0 systemd[1]: libpod-conmon-fd0f265f0916dd30b65e90a18af6443863c2579614feba3408a0590c08a76baa.scope: Deactivated successfully.
Dec 09 16:04:23 compute-0 sudo[88153]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:23 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:04:23 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:23 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:04:23 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:23 compute-0 sudo[88764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:04:23 compute-0 sudo[88764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:23 compute-0 sudo[88764]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:23 compute-0 sudo[88789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:23 compute-0 sudo[88789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:23 compute-0 sudo[88789]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:23 compute-0 sudo[88814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 09 16:04:23 compute-0 sudo[88814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:23 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 09 16:04:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 09 16:04:24 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1632989575,v1:192.168.122.100:6811/1632989575]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 09 16:04:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Dec 09 16:04:24 compute-0 ceph-osd[88099]: osd.2 0 done with init, starting boot process
Dec 09 16:04:24 compute-0 ceph-osd[88099]: osd.2 0 start_boot
Dec 09 16:04:24 compute-0 ceph-osd[88099]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 09 16:04:24 compute-0 ceph-osd[88099]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 09 16:04:24 compute-0 ceph-osd[88099]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 09 16:04:24 compute-0 ceph-osd[88099]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 09 16:04:24 compute-0 ceph-osd[88099]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec 09 16:04:24 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Dec 09 16:04:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:24 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:24 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:24 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:04:24 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1632989575; not ready for session (expect reconnect)
Dec 09 16:04:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:24 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:24 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:24 compute-0 ceph-mgr[75515]: [devicehealth INFO root] creating main.db for devicehealth
Dec 09 16:04:24 compute-0 podman[88882]: 2025-12-09 16:04:24.177138008 +0000 UTC m=+0.076711723 container exec 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:24 compute-0 ceph-mgr[75515]: [devicehealth INFO root] Check health
Dec 09 16:04:24 compute-0 ceph-mgr[75515]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec 09 16:04:24 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 09 16:04:24 compute-0 sudo[88913]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 09 16:04:24 compute-0 sudo[88913]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 16:04:24 compute-0 sudo[88913]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 09 16:04:24 compute-0 sudo[88913]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:24 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 09 16:04:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 09 16:04:24 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 09 16:04:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:04:24 compute-0 podman[88882]: 2025-12-09 16:04:24.289995643 +0000 UTC m=+0.189569338 container exec_died 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 16:04:24 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:24 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:24 compute-0 ceph-mon[75222]: from='osd.2 [v2:192.168.122.100:6810/1632989575,v1:192.168.122.100:6811/1632989575]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 09 16:04:24 compute-0 ceph-mon[75222]: osdmap e15: 3 total, 2 up, 3 in
Dec 09 16:04:24 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:24 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:24 compute-0 ceph-mon[75222]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 09 16:04:24 compute-0 ceph-mon[75222]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 09 16:04:24 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 09 16:04:24 compute-0 sudo[88814]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:04:24 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:04:24 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 09 16:04:25 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1632989575; not ready for session (expect reconnect)
Dec 09 16:04:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:25 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:25 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Dec 09 16:04:25 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Dec 09 16:04:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:25 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:25 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:25 compute-0 sudo[89039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:25 compute-0 sudo[89039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:25 compute-0 sudo[89039]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:25 compute-0 sudo[89065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- inventory --format=json-pretty --filter-for-batch
Dec 09 16:04:25 compute-0 sudo[89065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:25 compute-0 podman[89103]: 2025-12-09 16:04:25.435100541 +0000 UTC m=+0.049850148 container create 8ebcc1efad562926a6c9b4c0795f4b3c94b07e009db63edf70304ca823e4b09f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_engelbart, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:25 compute-0 ceph-mon[75222]: purged_snaps scrub starts
Dec 09 16:04:25 compute-0 ceph-mon[75222]: purged_snaps scrub ok
Dec 09 16:04:25 compute-0 ceph-mon[75222]: pgmap v34: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 09 16:04:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:25 compute-0 ceph-mon[75222]: osdmap e16: 3 total, 2 up, 3 in
Dec 09 16:04:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:25 compute-0 systemd[1]: Started libpod-conmon-8ebcc1efad562926a6c9b4c0795f4b3c94b07e009db63edf70304ca823e4b09f.scope.
Dec 09 16:04:25 compute-0 podman[89103]: 2025-12-09 16:04:25.413367842 +0000 UTC m=+0.028117459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:25 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:25 compute-0 podman[89103]: 2025-12-09 16:04:25.551236823 +0000 UTC m=+0.165986480 container init 8ebcc1efad562926a6c9b4c0795f4b3c94b07e009db63edf70304ca823e4b09f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 16:04:25 compute-0 podman[89103]: 2025-12-09 16:04:25.558578738 +0000 UTC m=+0.173328355 container start 8ebcc1efad562926a6c9b4c0795f4b3c94b07e009db63edf70304ca823e4b09f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_engelbart, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:04:25 compute-0 zen_engelbart[89119]: 167 167
Dec 09 16:04:25 compute-0 systemd[1]: libpod-8ebcc1efad562926a6c9b4c0795f4b3c94b07e009db63edf70304ca823e4b09f.scope: Deactivated successfully.
Dec 09 16:04:25 compute-0 conmon[89119]: conmon 8ebcc1efad562926a6c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ebcc1efad562926a6c9b4c0795f4b3c94b07e009db63edf70304ca823e4b09f.scope/container/memory.events
Dec 09 16:04:25 compute-0 podman[89103]: 2025-12-09 16:04:25.575299704 +0000 UTC m=+0.190049321 container attach 8ebcc1efad562926a6c9b4c0795f4b3c94b07e009db63edf70304ca823e4b09f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_engelbart, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:04:25 compute-0 podman[89103]: 2025-12-09 16:04:25.576123525 +0000 UTC m=+0.190873172 container died 8ebcc1efad562926a6c9b4c0795f4b3c94b07e009db63edf70304ca823e4b09f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:04:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c192f1cded071cd9d9cf0c0925ccbbcd6631f2adcd0a9a9c3311398df1a7a3c4-merged.mount: Deactivated successfully.
Dec 09 16:04:25 compute-0 podman[89103]: 2025-12-09 16:04:25.701986327 +0000 UTC m=+0.316735954 container remove 8ebcc1efad562926a6c9b4c0795f4b3c94b07e009db63edf70304ca823e4b09f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:25 compute-0 systemd[1]: libpod-conmon-8ebcc1efad562926a6c9b4c0795f4b3c94b07e009db63edf70304ca823e4b09f.scope: Deactivated successfully.
Dec 09 16:04:25 compute-0 podman[89145]: 2025-12-09 16:04:25.8666369 +0000 UTC m=+0.046951261 container create 44bbbee1202bf31ba39be99656d7e43da99e7f0ae293dded7e7af3b3f65dabbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bhaskara, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:25 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 09 16:04:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:04:25
Dec 09 16:04:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:04:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Some PGs (1.000000) are inactive; try again later
Dec 09 16:04:25 compute-0 systemd[1]: Started libpod-conmon-44bbbee1202bf31ba39be99656d7e43da99e7f0ae293dded7e7af3b3f65dabbc.scope.
Dec 09 16:04:25 compute-0 podman[89145]: 2025-12-09 16:04:25.847774218 +0000 UTC m=+0.028088599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:25 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c7a763437af9f78e3f1f478e2c2b95e7c21701b504f78b17d9d41690ccb5c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c7a763437af9f78e3f1f478e2c2b95e7c21701b504f78b17d9d41690ccb5c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c7a763437af9f78e3f1f478e2c2b95e7c21701b504f78b17d9d41690ccb5c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c7a763437af9f78e3f1f478e2c2b95e7c21701b504f78b17d9d41690ccb5c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:26 compute-0 podman[89145]: 2025-12-09 16:04:26.001879581 +0000 UTC m=+0.182193972 container init 44bbbee1202bf31ba39be99656d7e43da99e7f0ae293dded7e7af3b3f65dabbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:04:26 compute-0 podman[89145]: 2025-12-09 16:04:26.007930282 +0000 UTC m=+0.188244643 container start 44bbbee1202bf31ba39be99656d7e43da99e7f0ae293dded7e7af3b3f65dabbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bhaskara, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:04:26 compute-0 podman[89145]: 2025-12-09 16:04:26.026837776 +0000 UTC m=+0.207152197 container attach 44bbbee1202bf31ba39be99656d7e43da99e7f0ae293dded7e7af3b3f65dabbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bhaskara, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1632989575; not ready for session (expect reconnect)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ysegzv(active, since 60s)
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:04:26 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mgrmap e9: compute-0.ysegzv(active, since 60s)
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]: [
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:     {
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:         "available": false,
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:         "being_replaced": false,
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:         "ceph_device_lvm": false,
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:         "lsm_data": {},
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:         "lvs": [],
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:         "path": "/dev/sr0",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:         "rejected_reasons": [
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "Has a FileSystem",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "Insufficient space (<5GB)"
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:         ],
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:         "sys_api": {
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "actuators": null,
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "device_nodes": [
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:                 "sr0"
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             ],
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "devname": "sr0",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "human_readable_size": "482.00 KB",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "id_bus": "ata",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "model": "QEMU DVD-ROM",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "nr_requests": "2",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "parent": "/dev/sr0",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "partitions": {},
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "path": "/dev/sr0",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "removable": "1",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "rev": "2.5+",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "ro": "0",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "rotational": "1",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "sas_address": "",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "sas_device_handle": "",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "scheduler_mode": "mq-deadline",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "sectors": 0,
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "sectorsize": "2048",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "size": 493568.0,
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "support_discard": "2048",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "type": "disk",
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:             "vendor": "QEMU"
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:         }
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]:     }
Dec 09 16:04:26 compute-0 peaceful_bhaskara[89161]: ]
Dec 09 16:04:26 compute-0 systemd[1]: libpod-44bbbee1202bf31ba39be99656d7e43da99e7f0ae293dded7e7af3b3f65dabbc.scope: Deactivated successfully.
Dec 09 16:04:26 compute-0 podman[89145]: 2025-12-09 16:04:26.618789675 +0000 UTC m=+0.799104076 container died 44bbbee1202bf31ba39be99656d7e43da99e7f0ae293dded7e7af3b3f65dabbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-55c7a763437af9f78e3f1f478e2c2b95e7c21701b504f78b17d9d41690ccb5c5-merged.mount: Deactivated successfully.
Dec 09 16:04:26 compute-0 podman[89145]: 2025-12-09 16:04:26.723859362 +0000 UTC m=+0.904173733 container remove 44bbbee1202bf31ba39be99656d7e43da99e7f0ae293dded7e7af3b3f65dabbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 09 16:04:26 compute-0 systemd[1]: libpod-conmon-44bbbee1202bf31ba39be99656d7e43da99e7f0ae293dded7e7af3b3f65dabbc.scope: Deactivated successfully.
Dec 09 16:04:26 compute-0 sudo[89065]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43686k
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43686k
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Dec 09 16:04:26 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:04:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:04:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:26 compute-0 sudo[89908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:26 compute-0 sudo[89908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:26 compute-0 sudo[89908]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:26 compute-0 sudo[89933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:04:27 compute-0 sudo[89933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:27 compute-0 ceph-mgr[75515]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1632989575; not ready for session (expect reconnect)
Dec 09 16:04:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:27 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:27 compute-0 ceph-mgr[75515]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 16:04:27 compute-0 ceph-osd[88099]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 31.834 iops: 8149.514 elapsed_sec: 0.368
Dec 09 16:04:27 compute-0 ceph-osd[88099]: log_channel(cluster) log [WRN] : OSD bench result of 8149.513983 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 09 16:04:27 compute-0 ceph-osd[88099]: osd.2 0 waiting for initial osdmap
Dec 09 16:04:27 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2[88095]: 2025-12-09T16:04:27.226+0000 7feb52b5f640 -1 osd.2 0 waiting for initial osdmap
Dec 09 16:04:27 compute-0 ceph-osd[88099]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 09 16:04:27 compute-0 ceph-osd[88099]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 09 16:04:27 compute-0 ceph-osd[88099]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 09 16:04:27 compute-0 ceph-osd[88099]: osd.2 16 check_osdmap_features require_osd_release unknown -> tentacle
Dec 09 16:04:27 compute-0 sudo[89992]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyjukkfrceqiiyjjpmjspoheunthwcml ; /usr/bin/python3'
Dec 09 16:04:27 compute-0 ceph-osd[88099]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 09 16:04:27 compute-0 ceph-osd[88099]: osd.2 16 set_numa_affinity not setting numa affinity
Dec 09 16:04:27 compute-0 sudo[89992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:27 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-osd-2[88095]: 2025-12-09T16:04:27.248+0000 7feb4d964640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 09 16:04:27 compute-0 ceph-osd[88099]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Dec 09 16:04:27 compute-0 podman[89994]: 2025-12-09 16:04:27.317308283 +0000 UTC m=+0.059769813 container create ac22333709e2949b49a1a8a6f1cd30d0e767f8865b7d1a5cd594aee999a3a8b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:27 compute-0 systemd[1]: Started libpod-conmon-ac22333709e2949b49a1a8a6f1cd30d0e767f8865b7d1a5cd594aee999a3a8b8.scope.
Dec 09 16:04:27 compute-0 podman[89994]: 2025-12-09 16:04:27.29428041 +0000 UTC m=+0.036742040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:27 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:27 compute-0 podman[89994]: 2025-12-09 16:04:27.404025722 +0000 UTC m=+0.146487282 container init ac22333709e2949b49a1a8a6f1cd30d0e767f8865b7d1a5cd594aee999a3a8b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_raman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 09 16:04:27 compute-0 python3[89996]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:27 compute-0 podman[89994]: 2025-12-09 16:04:27.411242994 +0000 UTC m=+0.153704554 container start ac22333709e2949b49a1a8a6f1cd30d0e767f8865b7d1a5cd594aee999a3a8b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:27 compute-0 crazy_raman[90011]: 167 167
Dec 09 16:04:27 compute-0 systemd[1]: libpod-ac22333709e2949b49a1a8a6f1cd30d0e767f8865b7d1a5cd594aee999a3a8b8.scope: Deactivated successfully.
Dec 09 16:04:27 compute-0 podman[89994]: 2025-12-09 16:04:27.416149944 +0000 UTC m=+0.158611484 container attach ac22333709e2949b49a1a8a6f1cd30d0e767f8865b7d1a5cd594aee999a3a8b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_raman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:27 compute-0 podman[89994]: 2025-12-09 16:04:27.416489754 +0000 UTC m=+0.158951274 container died ac22333709e2949b49a1a8a6f1cd30d0e767f8865b7d1a5cd594aee999a3a8b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_raman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 09 16:04:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-746c3d05d6bdcc6367197e5ae31c285dc1ecaf18077755736129002324d6c3e3-merged.mount: Deactivated successfully.
Dec 09 16:04:27 compute-0 podman[89994]: 2025-12-09 16:04:27.455011279 +0000 UTC m=+0.197472799 container remove ac22333709e2949b49a1a8a6f1cd30d0e767f8865b7d1a5cd594aee999a3a8b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:27 compute-0 systemd[1]: libpod-conmon-ac22333709e2949b49a1a8a6f1cd30d0e767f8865b7d1a5cd594aee999a3a8b8.scope: Deactivated successfully.
Dec 09 16:04:27 compute-0 podman[90016]: 2025-12-09 16:04:27.481746901 +0000 UTC m=+0.056733822 container create c0afdef34f556b7e1b21513640c2aaf8e4db4c51fd58c09dd8869bb85d4a3b0f (image=quay.io/ceph/ceph:v20, name=naughty_keldysh, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 16:04:27 compute-0 systemd[1]: Started libpod-conmon-c0afdef34f556b7e1b21513640c2aaf8e4db4c51fd58c09dd8869bb85d4a3b0f.scope.
Dec 09 16:04:27 compute-0 ceph-mon[75222]: pgmap v37: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 09 16:04:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 09 16:04:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 09 16:04:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 09 16:04:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:04:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:04:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:04:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:27 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76bb9bc9dda7e67c9260efc2a35d4325e62d06e27d32c3a7f19ba74142040fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76bb9bc9dda7e67c9260efc2a35d4325e62d06e27d32c3a7f19ba74142040fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76bb9bc9dda7e67c9260efc2a35d4325e62d06e27d32c3a7f19ba74142040fa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:27 compute-0 podman[90016]: 2025-12-09 16:04:27.461656156 +0000 UTC m=+0.036643107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:27 compute-0 podman[90016]: 2025-12-09 16:04:27.567408032 +0000 UTC m=+0.142394983 container init c0afdef34f556b7e1b21513640c2aaf8e4db4c51fd58c09dd8869bb85d4a3b0f (image=quay.io/ceph/ceph:v20, name=naughty_keldysh, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:27 compute-0 podman[90016]: 2025-12-09 16:04:27.57374488 +0000 UTC m=+0.148731811 container start c0afdef34f556b7e1b21513640c2aaf8e4db4c51fd58c09dd8869bb85d4a3b0f (image=quay.io/ceph/ceph:v20, name=naughty_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 09 16:04:27 compute-0 podman[90016]: 2025-12-09 16:04:27.577700856 +0000 UTC m=+0.152687767 container attach c0afdef34f556b7e1b21513640c2aaf8e4db4c51fd58c09dd8869bb85d4a3b0f (image=quay.io/ceph/ceph:v20, name=naughty_keldysh, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 09 16:04:27 compute-0 podman[90058]: 2025-12-09 16:04:27.648203173 +0000 UTC m=+0.052281303 container create 1fd073c35db41f6e8d8f966b727d27e86179b0912810f6ee6818dd9be21b991e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:04:27 compute-0 systemd[1]: Started libpod-conmon-1fd073c35db41f6e8d8f966b727d27e86179b0912810f6ee6818dd9be21b991e.scope.
Dec 09 16:04:27 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06a272fd556debecfcead1c603c7b002331e9f1a6bacbca73b8a321148bfdcd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06a272fd556debecfcead1c603c7b002331e9f1a6bacbca73b8a321148bfdcd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06a272fd556debecfcead1c603c7b002331e9f1a6bacbca73b8a321148bfdcd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06a272fd556debecfcead1c603c7b002331e9f1a6bacbca73b8a321148bfdcd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06a272fd556debecfcead1c603c7b002331e9f1a6bacbca73b8a321148bfdcd7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:27 compute-0 podman[90058]: 2025-12-09 16:04:27.718507455 +0000 UTC m=+0.122585565 container init 1fd073c35db41f6e8d8f966b727d27e86179b0912810f6ee6818dd9be21b991e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_antonelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:27 compute-0 podman[90058]: 2025-12-09 16:04:27.628066727 +0000 UTC m=+0.032144857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:27 compute-0 podman[90058]: 2025-12-09 16:04:27.72883886 +0000 UTC m=+0.132916980 container start 1fd073c35db41f6e8d8f966b727d27e86179b0912810f6ee6818dd9be21b991e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:27 compute-0 podman[90058]: 2025-12-09 16:04:27.735448946 +0000 UTC m=+0.139527076 container attach 1fd073c35db41f6e8d8f966b727d27e86179b0912810f6ee6818dd9be21b991e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:04:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 09 16:04:27 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 09 16:04:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Dec 09 16:04:27 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/1632989575,v1:192.168.122.100:6811/1632989575] boot
Dec 09 16:04:27 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Dec 09 16:04:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 16:04:27 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:27 compute-0 ceph-osd[88099]: osd.2 17 state: booting -> active
Dec 09 16:04:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 09 16:04:28 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/851119243' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 09 16:04:28 compute-0 naughty_keldysh[90049]: 
Dec 09 16:04:28 compute-0 naughty_keldysh[90049]: {"fsid":"67f67f44-54fc-54ea-8df0-10931b6ecdaf","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":83,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1765296267,"num_in_osds":3,"osd_in_since":1765296245,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"creating+peering","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":474656768,"bytes_avail":42466627584,"bytes_total":42941284352,"inactive_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2025-12-09T16:03:01:781860+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-09T16:03:01.783688+0000","services":{}},"progress_events":{}}
Dec 09 16:04:28 compute-0 systemd[1]: libpod-c0afdef34f556b7e1b21513640c2aaf8e4db4c51fd58c09dd8869bb85d4a3b0f.scope: Deactivated successfully.
Dec 09 16:04:28 compute-0 podman[90111]: 2025-12-09 16:04:28.149419347 +0000 UTC m=+0.030818091 container died c0afdef34f556b7e1b21513640c2aaf8e4db4c51fd58c09dd8869bb85d4a3b0f (image=quay.io/ceph/ceph:v20, name=naughty_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a76bb9bc9dda7e67c9260efc2a35d4325e62d06e27d32c3a7f19ba74142040fa-merged.mount: Deactivated successfully.
Dec 09 16:04:28 compute-0 podman[90111]: 2025-12-09 16:04:28.189154885 +0000 UTC m=+0.070553599 container remove c0afdef34f556b7e1b21513640c2aaf8e4db4c51fd58c09dd8869bb85d4a3b0f (image=quay.io/ceph/ceph:v20, name=naughty_keldysh, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:04:28 compute-0 systemd[1]: libpod-conmon-c0afdef34f556b7e1b21513640c2aaf8e4db4c51fd58c09dd8869bb85d4a3b0f.scope: Deactivated successfully.
Dec 09 16:04:28 compute-0 zealous_antonelli[90076]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:04:28 compute-0 zealous_antonelli[90076]: --> All data devices are unavailable
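[The two zealous_antonelli lines appear to be a containerized ceph-volume device probe; "All data devices are unavailable" here most likely means the three LVM-backed devices are already consumed by osd.0-2, so there is nothing new to prepare. Two standard ways to confirm device availability, the second following the same cephadm ceph-volume invocation pattern recorded in the sudo lines below; exact output columns vary by release:]

    ceph orch device ls --wide          # AVAILABLE should read "No" for the consumed LVs
    sudo cephadm ceph-volume -- inventory --format json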
Dec 09 16:04:28 compute-0 sudo[89992]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:28 compute-0 systemd[1]: libpod-1fd073c35db41f6e8d8f966b727d27e86179b0912810f6ee6818dd9be21b991e.scope: Deactivated successfully.
Dec 09 16:04:28 compute-0 podman[90058]: 2025-12-09 16:04:28.220714986 +0000 UTC m=+0.624793096 container died 1fd073c35db41f6e8d8f966b727d27e86179b0912810f6ee6818dd9be21b991e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-06a272fd556debecfcead1c603c7b002331e9f1a6bacbca73b8a321148bfdcd7-merged.mount: Deactivated successfully.
Dec 09 16:04:28 compute-0 podman[90058]: 2025-12-09 16:04:28.267569653 +0000 UTC m=+0.671647763 container remove 1fd073c35db41f6e8d8f966b727d27e86179b0912810f6ee6818dd9be21b991e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_antonelli, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:28 compute-0 systemd[1]: libpod-conmon-1fd073c35db41f6e8d8f966b727d27e86179b0912810f6ee6818dd9be21b991e.scope: Deactivated successfully.
Dec 09 16:04:28 compute-0 sudo[89933]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:28 compute-0 sudo[90139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:28 compute-0 sudo[90139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:28 compute-0 sudo[90139]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:28 compute-0 sudo[90164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:04:28 compute-0 sudo[90164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:28 compute-0 sudo[90212]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovchzpkempuoqafywdksophseabutupl ; /usr/bin/python3'
Dec 09 16:04:28 compute-0 sudo[90212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:28 compute-0 ceph-mon[75222]: Adjusting osd_memory_target on compute-0 to 43686k
Dec 09 16:04:28 compute-0 ceph-mon[75222]: Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
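[The two lines above are cephadm's memory autotuner at work: it divided the host's spare RAM across daemons and arrived at 43686 KiB (43686 * 1024 = 44734464 bytes) per OSD, which the mon rejects because osd_memory_target has a hard floor of 939524096 bytes (896 MiB). On a small lab host like this, one documented workaround is to switch autotuning off and let the OSDs keep the default target; a minimal sketch:]

    # Disable cephadm's per-host osd_memory_target autotuning
    ceph config set osd osd_memory_target_autotune false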
Dec 09 16:04:28 compute-0 ceph-mon[75222]: OSD bench result of 8149.513983 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
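[The bench message is self-describing: the boot-time benchmark measured ~8150 IOPS, outside the sanity window for this device class (50-500 IOPS), so mclock keeps the prior 315 IOPS estimate. Following the log's own recommendation, a hedged sketch of the override after measuring real capacity with an external tool such as fio; the 8000 below is a placeholder for your own measurement, and the _hdd/_ssd suffix should match the OSD's device class:]

    ceph config set osd.2 osd_mclock_max_capacity_iops_hdd 8000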
Dec 09 16:04:28 compute-0 ceph-mon[75222]: osd.2 [v2:192.168.122.100:6810/1632989575,v1:192.168.122.100:6811/1632989575] boot
Dec 09 16:04:28 compute-0 ceph-mon[75222]: osdmap e17: 3 total, 3 up, 3 in
Dec 09 16:04:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 09 16:04:28 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/851119243' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 09 16:04:28 compute-0 python3[90214]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
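[The Ansible task above shells out to podman with --entrypoint ceph rather than using a cephadm shell. Note the double space in "create vms  replicated_rule" where pg_num would normally sit; as the mon's handle_command line at 16:04:29 below shows, "replicated_rule" is parsed into the erasure_code_profile positional slot, though the mon still accepts the request and creates the pool. A sketch of an unambiguous form of the same request, per the documented positional order pool / pg_num / pgp_num / replicated / rule; the pg counts are placeholders since the autoscaler is enabled:]

    ceph osd pool create vms 32 32 replicated replicated_rule --autoscale-mode on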
Dec 09 16:04:28 compute-0 podman[90226]: 2025-12-09 16:04:28.681789431 +0000 UTC m=+0.043554870 container create 7b5f52362ace8923f1f28f316948e5214efc2cce778986b9f3fb7f7dbf6e310c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_moser, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:28 compute-0 podman[90233]: 2025-12-09 16:04:28.708643487 +0000 UTC m=+0.048195845 container create df3dcedfb1d25885011a6452827c0bcc6d95fa5b3adecd9e0340c5a900801ff8 (image=quay.io/ceph/ceph:v20, name=elegant_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:04:28 compute-0 systemd[1]: Started libpod-conmon-7b5f52362ace8923f1f28f316948e5214efc2cce778986b9f3fb7f7dbf6e310c.scope.
Dec 09 16:04:28 compute-0 systemd[1]: Started libpod-conmon-df3dcedfb1d25885011a6452827c0bcc6d95fa5b3adecd9e0340c5a900801ff8.scope.
Dec 09 16:04:28 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:28 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35d74486d0371104afdf9adebd02c1192a53ec5d517e8a2fdc26e866db422377/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35d74486d0371104afdf9adebd02c1192a53ec5d517e8a2fdc26e866db422377/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:28 compute-0 podman[90226]: 2025-12-09 16:04:28.753602673 +0000 UTC m=+0.115368142 container init 7b5f52362ace8923f1f28f316948e5214efc2cce778986b9f3fb7f7dbf6e310c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_moser, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 09 16:04:28 compute-0 podman[90233]: 2025-12-09 16:04:28.75533401 +0000 UTC m=+0.094886358 container init df3dcedfb1d25885011a6452827c0bcc6d95fa5b3adecd9e0340c5a900801ff8 (image=quay.io/ceph/ceph:v20, name=elegant_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:28 compute-0 podman[90226]: 2025-12-09 16:04:28.661022149 +0000 UTC m=+0.022787638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:28 compute-0 podman[90226]: 2025-12-09 16:04:28.759263094 +0000 UTC m=+0.121028533 container start 7b5f52362ace8923f1f28f316948e5214efc2cce778986b9f3fb7f7dbf6e310c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 09 16:04:28 compute-0 podman[90233]: 2025-12-09 16:04:28.761420702 +0000 UTC m=+0.100973050 container start df3dcedfb1d25885011a6452827c0bcc6d95fa5b3adecd9e0340c5a900801ff8 (image=quay.io/ceph/ceph:v20, name=elegant_shockley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:28 compute-0 podman[90226]: 2025-12-09 16:04:28.76323792 +0000 UTC m=+0.125003389 container attach 7b5f52362ace8923f1f28f316948e5214efc2cce778986b9f3fb7f7dbf6e310c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_moser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 09 16:04:28 compute-0 systemd[1]: libpod-7b5f52362ace8923f1f28f316948e5214efc2cce778986b9f3fb7f7dbf6e310c.scope: Deactivated successfully.
Dec 09 16:04:28 compute-0 conmon[90255]: conmon 7b5f52362ace8923f1f2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b5f52362ace8923f1f28f316948e5214efc2cce778986b9f3fb7f7dbf6e310c.scope/container/memory.events
Dec 09 16:04:28 compute-0 brave_moser[90255]: 167 167
Dec 09 16:04:28 compute-0 podman[90226]: 2025-12-09 16:04:28.764447982 +0000 UTC m=+0.126213421 container died 7b5f52362ace8923f1f28f316948e5214efc2cce778986b9f3fb7f7dbf6e310c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_moser, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:28 compute-0 podman[90233]: 2025-12-09 16:04:28.783088329 +0000 UTC m=+0.122640677 container attach df3dcedfb1d25885011a6452827c0bcc6d95fa5b3adecd9e0340c5a900801ff8 (image=quay.io/ceph/ceph:v20, name=elegant_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:04:28 compute-0 podman[90233]: 2025-12-09 16:04:28.691176351 +0000 UTC m=+0.030728719 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4720cbd06fc780d5afae37cabf355217a19ade64c459d11f73b1ab17ebcc0329-merged.mount: Deactivated successfully.
Dec 09 16:04:28 compute-0 podman[90226]: 2025-12-09 16:04:28.807529329 +0000 UTC m=+0.169294768 container remove 7b5f52362ace8923f1f28f316948e5214efc2cce778986b9f3fb7f7dbf6e310c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_moser, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:28 compute-0 systemd[1]: libpod-conmon-7b5f52362ace8923f1f28f316948e5214efc2cce778986b9f3fb7f7dbf6e310c.scope: Deactivated successfully.
Dec 09 16:04:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 09 16:04:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec 09 16:04:28 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec 09 16:04:28 compute-0 podman[90306]: 2025-12-09 16:04:28.989889545 +0000 UTC m=+0.053278520 container create 238e3779c3764b7402c0552ec1e4d45f2db7a3dfef75aea42905b2020885f785 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 09 16:04:29 compute-0 systemd[1]: Started libpod-conmon-238e3779c3764b7402c0552ec1e4d45f2db7a3dfef75aea42905b2020885f785.scope.
Dec 09 16:04:29 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:29 compute-0 podman[90306]: 2025-12-09 16:04:28.968923006 +0000 UTC m=+0.032311961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b19aa7bb871d9fb014f2737965397273b8c707a0e4d050dace6ab49ab4dc68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b19aa7bb871d9fb014f2737965397273b8c707a0e4d050dace6ab49ab4dc68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b19aa7bb871d9fb014f2737965397273b8c707a0e4d050dace6ab49ab4dc68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b19aa7bb871d9fb014f2737965397273b8c707a0e4d050dace6ab49ab4dc68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:29 compute-0 podman[90306]: 2025-12-09 16:04:29.082913731 +0000 UTC m=+0.146302726 container init 238e3779c3764b7402c0552ec1e4d45f2db7a3dfef75aea42905b2020885f785 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_dijkstra, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:29 compute-0 podman[90306]: 2025-12-09 16:04:29.094334705 +0000 UTC m=+0.157723660 container start 238e3779c3764b7402c0552ec1e4d45f2db7a3dfef75aea42905b2020885f785 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:29 compute-0 podman[90306]: 2025-12-09 16:04:29.098103356 +0000 UTC m=+0.161492341 container attach 238e3779c3764b7402c0552ec1e4d45f2db7a3dfef75aea42905b2020885f785 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_dijkstra, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 09 16:04:29 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1511116680' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 09 16:04:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]: {
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:     "0": [
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:         {
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "devices": [
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "/dev/loop3"
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             ],
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_name": "ceph_lv0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_size": "21470642176",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "name": "ceph_lv0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "tags": {
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.cluster_name": "ceph",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.crush_device_class": "",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.encrypted": "0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.objectstore": "bluestore",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.osd_id": "0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.type": "block",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.vdo": "0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.with_tpm": "0"
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             },
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "type": "block",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "vg_name": "ceph_vg0"
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:         }
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:     ],
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:     "1": [
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:         {
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "devices": [
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "/dev/loop4"
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             ],
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_name": "ceph_lv1",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_size": "21470642176",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "name": "ceph_lv1",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "tags": {
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.cluster_name": "ceph",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.crush_device_class": "",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.encrypted": "0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.objectstore": "bluestore",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.osd_id": "1",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.type": "block",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.vdo": "0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.with_tpm": "0"
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             },
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "type": "block",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "vg_name": "ceph_vg1"
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:         }
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:     ],
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:     "2": [
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:         {
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "devices": [
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "/dev/loop5"
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             ],
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_name": "ceph_lv2",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_size": "21470642176",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "name": "ceph_lv2",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "tags": {
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.cluster_name": "ceph",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.crush_device_class": "",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.encrypted": "0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.objectstore": "bluestore",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.osd_id": "2",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.type": "block",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.vdo": "0",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:                 "ceph.with_tpm": "0"
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             },
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "type": "block",
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:             "vg_name": "ceph_vg2"
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:         }
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]:     ]
Dec 09 16:04:29 compute-0 pensive_dijkstra[90323]: }
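[The "lvm list --format json" dump above keys each entry by OSD id. A minimal host-side sketch to flatten it into an osd / logical volume / physical device table, assuming jq is available and the cephadm binary is on PATH (the log itself runs a copied cephadm script via python3, but the subcommand is the same):]

    sudo cephadm ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json \
      | jq -r 'to_entries[] | "osd.\(.key)  \(.value[0].lv_path)  \(.value[0].devices[0])"'
    # osd.0  /dev/ceph_vg0/ceph_lv0  /dev/loop3
    # osd.1  /dev/ceph_vg1/ceph_lv1  /dev/loop4
    # osd.2  /dev/ceph_vg2/ceph_lv2  /dev/loop5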
Dec 09 16:04:29 compute-0 systemd[1]: libpod-238e3779c3764b7402c0552ec1e4d45f2db7a3dfef75aea42905b2020885f785.scope: Deactivated successfully.
Dec 09 16:04:29 compute-0 podman[90306]: 2025-12-09 16:04:29.436267259 +0000 UTC m=+0.499656194 container died 238e3779c3764b7402c0552ec1e4d45f2db7a3dfef75aea42905b2020885f785 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_dijkstra, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 16:04:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-07b19aa7bb871d9fb014f2737965397273b8c707a0e4d050dace6ab49ab4dc68-merged.mount: Deactivated successfully.
Dec 09 16:04:29 compute-0 podman[90306]: 2025-12-09 16:04:29.485369806 +0000 UTC m=+0.548758741 container remove 238e3779c3764b7402c0552ec1e4d45f2db7a3dfef75aea42905b2020885f785 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:04:29 compute-0 systemd[1]: libpod-conmon-238e3779c3764b7402c0552ec1e4d45f2db7a3dfef75aea42905b2020885f785.scope: Deactivated successfully.
Dec 09 16:04:29 compute-0 sudo[90164]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:29 compute-0 ceph-mon[75222]: pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 09 16:04:29 compute-0 ceph-mon[75222]: osdmap e18: 3 total, 3 up, 3 in
Dec 09 16:04:29 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1511116680' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 09 16:04:29 compute-0 sudo[90348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:29 compute-0 sudo[90348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:29 compute-0 sudo[90348]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:29 compute-0 sudo[90373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:04:29 compute-0 sudo[90373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:29 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 09 16:04:29 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1511116680' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 16:04:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec 09 16:04:29 compute-0 elegant_shockley[90260]: pool 'vms' created
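[With the pool reported created, a quick verification pass shows the replication size, CRUSH rule, and autoscaler state that actually took effect; both commands are standard ceph CLI:]

    ceph osd pool ls detail
    ceph osd pool autoscale-status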
Dec 09 16:04:29 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec 09 16:04:29 compute-0 systemd[1]: libpod-df3dcedfb1d25885011a6452827c0bcc6d95fa5b3adecd9e0340c5a900801ff8.scope: Deactivated successfully.
Dec 09 16:04:29 compute-0 podman[90411]: 2025-12-09 16:04:29.920428009 +0000 UTC m=+0.039751440 container create 2eb8b11d292b3fca5f7ed0f532d9e095ae80e6653c253485030b6f37b9fee02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_euler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:29 compute-0 podman[90423]: 2025-12-09 16:04:29.949042101 +0000 UTC m=+0.030770731 container died df3dcedfb1d25885011a6452827c0bcc6d95fa5b3adecd9e0340c5a900801ff8 (image=quay.io/ceph/ceph:v20, name=elegant_shockley, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 16:04:29 compute-0 systemd[1]: Started libpod-conmon-2eb8b11d292b3fca5f7ed0f532d9e095ae80e6653c253485030b6f37b9fee02a.scope.
Dec 09 16:04:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:04:29 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-35d74486d0371104afdf9adebd02c1192a53ec5d517e8a2fdc26e866db422377-merged.mount: Deactivated successfully.
Dec 09 16:04:29 compute-0 podman[90423]: 2025-12-09 16:04:29.987976817 +0000 UTC m=+0.069705427 container remove df3dcedfb1d25885011a6452827c0bcc6d95fa5b3adecd9e0340c5a900801ff8 (image=quay.io/ceph/ceph:v20, name=elegant_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 09 16:04:29 compute-0 systemd[1]: libpod-conmon-df3dcedfb1d25885011a6452827c0bcc6d95fa5b3adecd9e0340c5a900801ff8.scope: Deactivated successfully.
Dec 09 16:04:29 compute-0 podman[90411]: 2025-12-09 16:04:29.998287792 +0000 UTC m=+0.117611293 container init 2eb8b11d292b3fca5f7ed0f532d9e095ae80e6653c253485030b6f37b9fee02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:30 compute-0 podman[90411]: 2025-12-09 16:04:29.902568083 +0000 UTC m=+0.021891534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:30 compute-0 podman[90411]: 2025-12-09 16:04:30.00459744 +0000 UTC m=+0.123920911 container start 2eb8b11d292b3fca5f7ed0f532d9e095ae80e6653c253485030b6f37b9fee02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:30 compute-0 vigorous_euler[90441]: 167 167
Dec 09 16:04:30 compute-0 systemd[1]: libpod-2eb8b11d292b3fca5f7ed0f532d9e095ae80e6653c253485030b6f37b9fee02a.scope: Deactivated successfully.
Dec 09 16:04:30 compute-0 podman[90411]: 2025-12-09 16:04:30.008196646 +0000 UTC m=+0.127520087 container attach 2eb8b11d292b3fca5f7ed0f532d9e095ae80e6653c253485030b6f37b9fee02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_euler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 09 16:04:30 compute-0 conmon[90441]: conmon 2eb8b11d292b3fca5f7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2eb8b11d292b3fca5f7ed0f532d9e095ae80e6653c253485030b6f37b9fee02a.scope/container/memory.events
Dec 09 16:04:30 compute-0 podman[90411]: 2025-12-09 16:04:30.012589553 +0000 UTC m=+0.131913004 container died 2eb8b11d292b3fca5f7ed0f532d9e095ae80e6653c253485030b6f37b9fee02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_euler, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:04:30 compute-0 sudo[90212]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ed752ffcf4df3f4c5e8346104346d3bf9f9a42b609791b72bbd9aefa2aacd6b-merged.mount: Deactivated successfully.
Dec 09 16:04:30 compute-0 podman[90411]: 2025-12-09 16:04:30.051063007 +0000 UTC m=+0.170386438 container remove 2eb8b11d292b3fca5f7ed0f532d9e095ae80e6653c253485030b6f37b9fee02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 09 16:04:30 compute-0 systemd[1]: libpod-conmon-2eb8b11d292b3fca5f7ed0f532d9e095ae80e6653c253485030b6f37b9fee02a.scope: Deactivated successfully.
Dec 09 16:04:30 compute-0 sudo[90491]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukoluqnhdvuccgrzotrkughuumsqwslq ; /usr/bin/python3'
Dec 09 16:04:30 compute-0 sudo[90491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:30 compute-0 podman[90489]: 2025-12-09 16:04:30.240235093 +0000 UTC m=+0.050635499 container create 07e1200764da77f81ff124308732ba0966f8b1d56d6068f9ae6295f206eaa804 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 09 16:04:30 compute-0 systemd[1]: Started libpod-conmon-07e1200764da77f81ff124308732ba0966f8b1d56d6068f9ae6295f206eaa804.scope.
Dec 09 16:04:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:30 compute-0 podman[90489]: 2025-12-09 16:04:30.21831349 +0000 UTC m=+0.028713916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22f2577306b61e79e6f0a3d06f486576abee9202894832928cb5e003a9bd9b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22f2577306b61e79e6f0a3d06f486576abee9202894832928cb5e003a9bd9b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22f2577306b61e79e6f0a3d06f486576abee9202894832928cb5e003a9bd9b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22f2577306b61e79e6f0a3d06f486576abee9202894832928cb5e003a9bd9b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:30 compute-0 podman[90489]: 2025-12-09 16:04:30.354117626 +0000 UTC m=+0.164518072 container init 07e1200764da77f81ff124308732ba0966f8b1d56d6068f9ae6295f206eaa804 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_booth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 16:04:30 compute-0 podman[90489]: 2025-12-09 16:04:30.366640739 +0000 UTC m=+0.177041155 container start 07e1200764da77f81ff124308732ba0966f8b1d56d6068f9ae6295f206eaa804 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_booth, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:30 compute-0 podman[90489]: 2025-12-09 16:04:30.370708377 +0000 UTC m=+0.181108793 container attach 07e1200764da77f81ff124308732ba0966f8b1d56d6068f9ae6295f206eaa804 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 09 16:04:30 compute-0 python3[90498]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:30 compute-0 podman[90514]: 2025-12-09 16:04:30.437489735 +0000 UTC m=+0.043827788 container create ed66c341418e53449406967acf3ba57be43e45b62dc46842fd4391847f5c7419 (image=quay.io/ceph/ceph:v20, name=pensive_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 09 16:04:30 compute-0 systemd[1]: Started libpod-conmon-ed66c341418e53449406967acf3ba57be43e45b62dc46842fd4391847f5c7419.scope.
Dec 09 16:04:30 compute-0 podman[90514]: 2025-12-09 16:04:30.418023057 +0000 UTC m=+0.024361130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21952ff3901d7c39c55e5610289e461acced0bca2e490bc1236e66cacaa469db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21952ff3901d7c39c55e5610289e461acced0bca2e490bc1236e66cacaa469db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:30 compute-0 podman[90514]: 2025-12-09 16:04:30.53157279 +0000 UTC m=+0.137910853 container init ed66c341418e53449406967acf3ba57be43e45b62dc46842fd4391847f5c7419 (image=quay.io/ceph/ceph:v20, name=pensive_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:30 compute-0 podman[90514]: 2025-12-09 16:04:30.537619471 +0000 UTC m=+0.143957504 container start ed66c341418e53449406967acf3ba57be43e45b62dc46842fd4391847f5c7419 (image=quay.io/ceph/ceph:v20, name=pensive_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:30 compute-0 podman[90514]: 2025-12-09 16:04:30.541161156 +0000 UTC m=+0.147499219 container attach ed66c341418e53449406967acf3ba57be43e45b62dc46842fd4391847f5c7419 (image=quay.io/ceph/ceph:v20, name=pensive_cori, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 16:04:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 09 16:04:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec 09 16:04:30 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec 09 16:04:30 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:04:30 compute-0 ceph-mon[75222]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:30 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1511116680' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 16:04:30 compute-0 ceph-mon[75222]: osdmap e19: 3 total, 3 up, 3 in
Dec 09 16:04:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 09 16:04:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3035517319' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 09 16:04:31 compute-0 lvm[90630]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:04:31 compute-0 lvm[90630]: VG ceph_vg1 finished
Dec 09 16:04:31 compute-0 lvm[90629]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:04:31 compute-0 lvm[90629]: VG ceph_vg0 finished
Dec 09 16:04:31 compute-0 lvm[90632]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:04:31 compute-0 lvm[90632]: VG ceph_vg2 finished
Dec 09 16:04:31 compute-0 sweet_booth[90509]: {}
Dec 09 16:04:31 compute-0 systemd[1]: libpod-07e1200764da77f81ff124308732ba0966f8b1d56d6068f9ae6295f206eaa804.scope: Deactivated successfully.
Dec 09 16:04:31 compute-0 systemd[1]: libpod-07e1200764da77f81ff124308732ba0966f8b1d56d6068f9ae6295f206eaa804.scope: Consumed 1.526s CPU time.
Dec 09 16:04:31 compute-0 podman[90489]: 2025-12-09 16:04:31.304598332 +0000 UTC m=+1.114998738 container died 07e1200764da77f81ff124308732ba0966f8b1d56d6068f9ae6295f206eaa804 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_booth, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a22f2577306b61e79e6f0a3d06f486576abee9202894832928cb5e003a9bd9b8-merged.mount: Deactivated successfully.
Dec 09 16:04:31 compute-0 podman[90489]: 2025-12-09 16:04:31.354256024 +0000 UTC m=+1.164656420 container remove 07e1200764da77f81ff124308732ba0966f8b1d56d6068f9ae6295f206eaa804 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:31 compute-0 systemd[1]: libpod-conmon-07e1200764da77f81ff124308732ba0966f8b1d56d6068f9ae6295f206eaa804.scope: Deactivated successfully.
Dec 09 16:04:31 compute-0 sudo[90373]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:04:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:04:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:31 compute-0 sudo[90647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:04:31 compute-0 sudo[90647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:31 compute-0 sudo[90647]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:31 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v44: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 09 16:04:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3035517319' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 16:04:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec 09 16:04:31 compute-0 pensive_cori[90530]: pool 'volumes' created
Dec 09 16:04:31 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec 09 16:04:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:04:31 compute-0 ceph-mon[75222]: osdmap e20: 3 total, 3 up, 3 in
Dec 09 16:04:31 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3035517319' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 09 16:04:31 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:31 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:31 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3035517319' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 16:04:31 compute-0 ceph-mon[75222]: osdmap e21: 3 total, 3 up, 3 in
Dec 09 16:04:31 compute-0 systemd[1]: libpod-ed66c341418e53449406967acf3ba57be43e45b62dc46842fd4391847f5c7419.scope: Deactivated successfully.
Dec 09 16:04:31 compute-0 conmon[90530]: conmon ed66c341418e53449406 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed66c341418e53449406967acf3ba57be43e45b62dc46842fd4391847f5c7419.scope/container/memory.events
Dec 09 16:04:31 compute-0 podman[90514]: 2025-12-09 16:04:31.915853136 +0000 UTC m=+1.522191179 container died ed66c341418e53449406967acf3ba57be43e45b62dc46842fd4391847f5c7419 (image=quay.io/ceph/ceph:v20, name=pensive_cori, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-21952ff3901d7c39c55e5610289e461acced0bca2e490bc1236e66cacaa469db-merged.mount: Deactivated successfully.
Dec 09 16:04:31 compute-0 podman[90514]: 2025-12-09 16:04:31.954171496 +0000 UTC m=+1.560509529 container remove ed66c341418e53449406967acf3ba57be43e45b62dc46842fd4391847f5c7419 (image=quay.io/ceph/ceph:v20, name=pensive_cori, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:31 compute-0 systemd[1]: libpod-conmon-ed66c341418e53449406967acf3ba57be43e45b62dc46842fd4391847f5c7419.scope: Deactivated successfully.
Dec 09 16:04:31 compute-0 sudo[90491]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:32 compute-0 sudo[90706]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqdbfnuktjbxbdwijsctabpqqrlmwffc ; /usr/bin/python3'
Dec 09 16:04:32 compute-0 sudo[90706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:32 compute-0 python3[90708]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:32 compute-0 podman[90709]: 2025-12-09 16:04:32.375389531 +0000 UTC m=+0.055561131 container create d0f23608052c529de6467d6b21a529a7d66bbaaf6737d8f9a982375082b47173 (image=quay.io/ceph/ceph:v20, name=elastic_turing, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:04:32 compute-0 systemd[1]: Started libpod-conmon-d0f23608052c529de6467d6b21a529a7d66bbaaf6737d8f9a982375082b47173.scope.
Dec 09 16:04:32 compute-0 podman[90709]: 2025-12-09 16:04:32.346281936 +0000 UTC m=+0.026453586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:32 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d295de0f004fbfb10e8af8ac1b88add862603af71d525608ac6b607aef3c4e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d295de0f004fbfb10e8af8ac1b88add862603af71d525608ac6b607aef3c4e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:32 compute-0 podman[90709]: 2025-12-09 16:04:32.469106726 +0000 UTC m=+0.149278366 container init d0f23608052c529de6467d6b21a529a7d66bbaaf6737d8f9a982375082b47173 (image=quay.io/ceph/ceph:v20, name=elastic_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:04:32 compute-0 podman[90709]: 2025-12-09 16:04:32.478101645 +0000 UTC m=+0.158273205 container start d0f23608052c529de6467d6b21a529a7d66bbaaf6737d8f9a982375082b47173 (image=quay.io/ceph/ceph:v20, name=elastic_turing, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 16:04:32 compute-0 podman[90709]: 2025-12-09 16:04:32.481809984 +0000 UTC m=+0.161981544 container attach d0f23608052c529de6467d6b21a529a7d66bbaaf6737d8f9a982375082b47173 (image=quay.io/ceph/ceph:v20, name=elastic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 09 16:04:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 09 16:04:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/874523532' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 09 16:04:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec 09 16:04:32 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:04:32 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec 09 16:04:33 compute-0 ceph-mon[75222]: pgmap v44: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:33 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/874523532' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 09 16:04:33 compute-0 ceph-mon[75222]: osdmap e22: 3 total, 3 up, 3 in
Dec 09 16:04:33 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v47: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 09 16:04:33 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/874523532' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 16:04:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec 09 16:04:33 compute-0 elastic_turing[90724]: pool 'backups' created
Dec 09 16:04:33 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec 09 16:04:33 compute-0 systemd[1]: libpod-d0f23608052c529de6467d6b21a529a7d66bbaaf6737d8f9a982375082b47173.scope: Deactivated successfully.
Dec 09 16:04:34 compute-0 podman[90751]: 2025-12-09 16:04:34.006562519 +0000 UTC m=+0.047027623 container died d0f23608052c529de6467d6b21a529a7d66bbaaf6737d8f9a982375082b47173 (image=quay.io/ceph/ceph:v20, name=elastic_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8d295de0f004fbfb10e8af8ac1b88add862603af71d525608ac6b607aef3c4e-merged.mount: Deactivated successfully.
Dec 09 16:04:34 compute-0 podman[90751]: 2025-12-09 16:04:34.046237035 +0000 UTC m=+0.086702109 container remove d0f23608052c529de6467d6b21a529a7d66bbaaf6737d8f9a982375082b47173 (image=quay.io/ceph/ceph:v20, name=elastic_turing, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:04:34 compute-0 systemd[1]: libpod-conmon-d0f23608052c529de6467d6b21a529a7d66bbaaf6737d8f9a982375082b47173.scope: Deactivated successfully.
Dec 09 16:04:34 compute-0 sudo[90706]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:34 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:04:34 compute-0 sudo[90788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwfoyqnygxnppyhhedneemdxsaqnpukx ; /usr/bin/python3'
Dec 09 16:04:34 compute-0 sudo[90788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:04:34 compute-0 python3[90790]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:34 compute-0 podman[90791]: 2025-12-09 16:04:34.461940243 +0000 UTC m=+0.045610455 container create fd2ea3dbd6527a4f66456e352359bd6e5ac788a3db0dbbe6c15e0cce37bdf815 (image=quay.io/ceph/ceph:v20, name=thirsty_wright, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 09 16:04:34 compute-0 systemd[1]: Started libpod-conmon-fd2ea3dbd6527a4f66456e352359bd6e5ac788a3db0dbbe6c15e0cce37bdf815.scope.
Dec 09 16:04:34 compute-0 podman[90791]: 2025-12-09 16:04:34.441606502 +0000 UTC m=+0.025276744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:34 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741debfe0142a804506f65f2fbcb7ab7719ae640b03835ab1534a5818ad0defe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741debfe0142a804506f65f2fbcb7ab7719ae640b03835ab1534a5818ad0defe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:34 compute-0 podman[90791]: 2025-12-09 16:04:34.559981793 +0000 UTC m=+0.143652085 container init fd2ea3dbd6527a4f66456e352359bd6e5ac788a3db0dbbe6c15e0cce37bdf815 (image=quay.io/ceph/ceph:v20, name=thirsty_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 09 16:04:34 compute-0 podman[90791]: 2025-12-09 16:04:34.566853046 +0000 UTC m=+0.150523268 container start fd2ea3dbd6527a4f66456e352359bd6e5ac788a3db0dbbe6c15e0cce37bdf815 (image=quay.io/ceph/ceph:v20, name=thirsty_wright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:34 compute-0 podman[90791]: 2025-12-09 16:04:34.570907354 +0000 UTC m=+0.154577566 container attach fd2ea3dbd6527a4f66456e352359bd6e5ac788a3db0dbbe6c15e0cce37bdf815 (image=quay.io/ceph/ceph:v20, name=thirsty_wright, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 16:04:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 09 16:04:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec 09 16:04:34 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec 09 16:04:34 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:04:34 compute-0 ceph-mon[75222]: pgmap v47: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:34 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/874523532' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 16:04:34 compute-0 ceph-mon[75222]: osdmap e23: 3 total, 3 up, 3 in
Dec 09 16:04:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 09 16:04:34 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/226109469' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 09 16:04:35 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v50: 4 pgs: 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 09 16:04:35 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/226109469' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 16:04:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec 09 16:04:35 compute-0 thirsty_wright[90807]: pool 'images' created
Dec 09 16:04:35 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec 09 16:04:35 compute-0 ceph-mon[75222]: osdmap e24: 3 total, 3 up, 3 in
Dec 09 16:04:35 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/226109469' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 09 16:04:35 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/226109469' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 16:04:35 compute-0 ceph-mon[75222]: osdmap e25: 3 total, 3 up, 3 in
Dec 09 16:04:35 compute-0 systemd[1]: libpod-fd2ea3dbd6527a4f66456e352359bd6e5ac788a3db0dbbe6c15e0cce37bdf815.scope: Deactivated successfully.
Dec 09 16:04:35 compute-0 podman[90791]: 2025-12-09 16:04:35.946708454 +0000 UTC m=+1.530378706 container died fd2ea3dbd6527a4f66456e352359bd6e5ac788a3db0dbbe6c15e0cce37bdf815 (image=quay.io/ceph/ceph:v20, name=thirsty_wright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-741debfe0142a804506f65f2fbcb7ab7719ae640b03835ab1534a5818ad0defe-merged.mount: Deactivated successfully.
Dec 09 16:04:35 compute-0 podman[90791]: 2025-12-09 16:04:35.992703369 +0000 UTC m=+1.576373591 container remove fd2ea3dbd6527a4f66456e352359bd6e5ac788a3db0dbbe6c15e0cce37bdf815 (image=quay.io/ceph/ceph:v20, name=thirsty_wright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:36 compute-0 systemd[1]: libpod-conmon-fd2ea3dbd6527a4f66456e352359bd6e5ac788a3db0dbbe6c15e0cce37bdf815.scope: Deactivated successfully.
Dec 09 16:04:36 compute-0 sudo[90788]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:36 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [2] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:04:36 compute-0 sudo[90868]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbfrbenfklkofqmblwtnlkbxjrpnbdmv ; /usr/bin/python3'
Dec 09 16:04:36 compute-0 sudo[90868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:36 compute-0 python3[90870]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:36 compute-0 podman[90871]: 2025-12-09 16:04:36.428009999 +0000 UTC m=+0.068825324 container create 26c8c08f8894cff8648a8e5ccbf99c4dddb8e1507b950b828ffa9f314f84579c (image=quay.io/ceph/ceph:v20, name=angry_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:36 compute-0 systemd[1]: Started libpod-conmon-26c8c08f8894cff8648a8e5ccbf99c4dddb8e1507b950b828ffa9f314f84579c.scope.
Dec 09 16:04:36 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:36 compute-0 podman[90871]: 2025-12-09 16:04:36.40401996 +0000 UTC m=+0.044835335 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f816fe8ffb7f64c4552c434955e1962cedeeaf3ed479ca9cc1dea1b632688d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f816fe8ffb7f64c4552c434955e1962cedeeaf3ed479ca9cc1dea1b632688d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:36 compute-0 podman[90871]: 2025-12-09 16:04:36.549462702 +0000 UTC m=+0.190278047 container init 26c8c08f8894cff8648a8e5ccbf99c4dddb8e1507b950b828ffa9f314f84579c (image=quay.io/ceph/ceph:v20, name=angry_keldysh, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:36 compute-0 podman[90871]: 2025-12-09 16:04:36.556402437 +0000 UTC m=+0.197217772 container start 26c8c08f8894cff8648a8e5ccbf99c4dddb8e1507b950b828ffa9f314f84579c (image=quay.io/ceph/ceph:v20, name=angry_keldysh, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:04:36 compute-0 podman[90871]: 2025-12-09 16:04:36.559553001 +0000 UTC m=+0.200368336 container attach 26c8c08f8894cff8648a8e5ccbf99c4dddb8e1507b950b828ffa9f314f84579c (image=quay.io/ceph/ceph:v20, name=angry_keldysh, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:04:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 09 16:04:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec 09 16:04:36 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec 09 16:04:36 compute-0 ceph-mon[75222]: pgmap v50: 4 pgs: 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:36 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 26 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [2] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:04:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 09 16:04:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3491484733' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 09 16:04:37 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v53: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 09 16:04:37 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3491484733' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 16:04:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec 09 16:04:37 compute-0 angry_keldysh[90887]: pool 'cephfs.cephfs.meta' created
Dec 09 16:04:37 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec 09 16:04:37 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [0] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:04:37 compute-0 ceph-mon[75222]: osdmap e26: 3 total, 3 up, 3 in
Dec 09 16:04:37 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3491484733' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 09 16:04:37 compute-0 systemd[1]: libpod-26c8c08f8894cff8648a8e5ccbf99c4dddb8e1507b950b828ffa9f314f84579c.scope: Deactivated successfully.
Dec 09 16:04:37 compute-0 podman[90871]: 2025-12-09 16:04:37.993543118 +0000 UTC m=+1.634358483 container died 26c8c08f8894cff8648a8e5ccbf99c4dddb8e1507b950b828ffa9f314f84579c (image=quay.io/ceph/ceph:v20, name=angry_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 09 16:04:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-85f816fe8ffb7f64c4552c434955e1962cedeeaf3ed479ca9cc1dea1b632688d-merged.mount: Deactivated successfully.
Dec 09 16:04:38 compute-0 podman[90871]: 2025-12-09 16:04:38.034284803 +0000 UTC m=+1.675100138 container remove 26c8c08f8894cff8648a8e5ccbf99c4dddb8e1507b950b828ffa9f314f84579c (image=quay.io/ceph/ceph:v20, name=angry_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:04:38 compute-0 systemd[1]: libpod-conmon-26c8c08f8894cff8648a8e5ccbf99c4dddb8e1507b950b828ffa9f314f84579c.scope: Deactivated successfully.
Dec 09 16:04:38 compute-0 sudo[90868]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:38 compute-0 sudo[90947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwwyoadcncsjsfnmzrpwcltojqbrymsi ; /usr/bin/python3'
Dec 09 16:04:38 compute-0 sudo[90947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:38 compute-0 python3[90949]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:38 compute-0 podman[90950]: 2025-12-09 16:04:38.50743074 +0000 UTC m=+0.064118578 container create 9499dc568f10adbc7b33a55db163c6012b172b3213ff9a97a2895cc13601db45 (image=quay.io/ceph/ceph:v20, name=suspicious_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:04:38 compute-0 systemd[1]: Started libpod-conmon-9499dc568f10adbc7b33a55db163c6012b172b3213ff9a97a2895cc13601db45.scope.
Dec 09 16:04:38 compute-0 podman[90950]: 2025-12-09 16:04:38.477468603 +0000 UTC m=+0.034156531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:38 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e0ab5e9641a0ac8545643a59c88dd6898ee4d9f1fdd6361f0a741a752529e8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e0ab5e9641a0ac8545643a59c88dd6898ee4d9f1fdd6361f0a741a752529e8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:38 compute-0 podman[90950]: 2025-12-09 16:04:38.600957561 +0000 UTC m=+0.157645439 container init 9499dc568f10adbc7b33a55db163c6012b172b3213ff9a97a2895cc13601db45 (image=quay.io/ceph/ceph:v20, name=suspicious_stonebraker, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 09 16:04:38 compute-0 podman[90950]: 2025-12-09 16:04:38.610993088 +0000 UTC m=+0.167680936 container start 9499dc568f10adbc7b33a55db163c6012b172b3213ff9a97a2895cc13601db45 (image=quay.io/ceph/ceph:v20, name=suspicious_stonebraker, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:38 compute-0 podman[90950]: 2025-12-09 16:04:38.614356077 +0000 UTC m=+0.171043925 container attach 9499dc568f10adbc7b33a55db163c6012b172b3213ff9a97a2895cc13601db45 (image=quay.io/ceph/ceph:v20, name=suspicious_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 09 16:04:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec 09 16:04:38 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec 09 16:04:38 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 28 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [0] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:04:38 compute-0 ceph-mon[75222]: pgmap v53: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:38 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3491484733' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 16:04:38 compute-0 ceph-mon[75222]: osdmap e27: 3 total, 3 up, 3 in
Dec 09 16:04:38 compute-0 ceph-mon[75222]: osdmap e28: 3 total, 3 up, 3 in
Dec 09 16:04:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 09 16:04:39 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3022308522' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 09 16:04:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:04:39 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v56: 6 pgs: 1 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 09 16:04:39 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3022308522' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 16:04:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec 09 16:04:39 compute-0 suspicious_stonebraker[90965]: pool 'cephfs.cephfs.data' created
Dec 09 16:04:39 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec 09 16:04:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=0/0 n=0 ec=29/29 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:04:39 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3022308522' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 09 16:04:39 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3022308522' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 16:04:39 compute-0 ceph-mon[75222]: osdmap e29: 3 total, 3 up, 3 in
Dec 09 16:04:39 compute-0 systemd[1]: libpod-9499dc568f10adbc7b33a55db163c6012b172b3213ff9a97a2895cc13601db45.scope: Deactivated successfully.
Dec 09 16:04:39 compute-0 podman[90950]: 2025-12-09 16:04:39.992075158 +0000 UTC m=+1.548763026 container died 9499dc568f10adbc7b33a55db163c6012b172b3213ff9a97a2895cc13601db45 (image=quay.io/ceph/ceph:v20, name=suspicious_stonebraker, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e0ab5e9641a0ac8545643a59c88dd6898ee4d9f1fdd6361f0a741a752529e8b-merged.mount: Deactivated successfully.
Dec 09 16:04:40 compute-0 podman[90950]: 2025-12-09 16:04:40.033682086 +0000 UTC m=+1.590369944 container remove 9499dc568f10adbc7b33a55db163c6012b172b3213ff9a97a2895cc13601db45 (image=quay.io/ceph/ceph:v20, name=suspicious_stonebraker, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:40 compute-0 systemd[1]: libpod-conmon-9499dc568f10adbc7b33a55db163c6012b172b3213ff9a97a2895cc13601db45.scope: Deactivated successfully.
Dec 09 16:04:40 compute-0 sudo[90947]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:40 compute-0 sudo[91026]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wryxfxzefkomjshtiwkerbxsdjtjfjht ; /usr/bin/python3'
Dec 09 16:04:40 compute-0 sudo[91026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:40 compute-0 python3[91028]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:40 compute-0 podman[91029]: 2025-12-09 16:04:40.485147095 +0000 UTC m=+0.054489802 container create 43a336ce2d98c52680f495344027da3c7af8dc49acc01472085aa1c9e113e585 (image=quay.io/ceph/ceph:v20, name=musing_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:04:40 compute-0 systemd[1]: Started libpod-conmon-43a336ce2d98c52680f495344027da3c7af8dc49acc01472085aa1c9e113e585.scope.
Dec 09 16:04:40 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800f96c4fb0db97c7487cffe91ee007186942a2c050040e8397d3e6c6dc82579/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800f96c4fb0db97c7487cffe91ee007186942a2c050040e8397d3e6c6dc82579/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:40 compute-0 podman[91029]: 2025-12-09 16:04:40.458546407 +0000 UTC m=+0.027889174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:40 compute-0 podman[91029]: 2025-12-09 16:04:40.562790222 +0000 UTC m=+0.132132929 container init 43a336ce2d98c52680f495344027da3c7af8dc49acc01472085aa1c9e113e585 (image=quay.io/ceph/ceph:v20, name=musing_jackson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:04:40 compute-0 podman[91029]: 2025-12-09 16:04:40.569503351 +0000 UTC m=+0.138846038 container start 43a336ce2d98c52680f495344027da3c7af8dc49acc01472085aa1c9e113e585 (image=quay.io/ceph/ceph:v20, name=musing_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 09 16:04:40 compute-0 podman[91029]: 2025-12-09 16:04:40.572784108 +0000 UTC m=+0.142126835 container attach 43a336ce2d98c52680f495344027da3c7af8dc49acc01472085aa1c9e113e585 (image=quay.io/ceph/ceph:v20, name=musing_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 09 16:04:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec 09 16:04:40 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec 09 16:04:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec 09 16:04:40 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/338686529' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 09 16:04:40 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 30 pg[7.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:04:40 compute-0 ceph-mon[75222]: pgmap v56: 6 pgs: 1 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:40 compute-0 ceph-mon[75222]: osdmap e30: 3 total, 3 up, 3 in
Dec 09 16:04:40 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/338686529' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 09 16:04:41 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 09 16:04:41 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/338686529' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 09 16:04:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec 09 16:04:41 compute-0 musing_jackson[91044]: enabled application 'rbd' on pool 'vms'
Dec 09 16:04:41 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec 09 16:04:42 compute-0 systemd[1]: libpod-43a336ce2d98c52680f495344027da3c7af8dc49acc01472085aa1c9e113e585.scope: Deactivated successfully.
Dec 09 16:04:42 compute-0 podman[91029]: 2025-12-09 16:04:42.005270518 +0000 UTC m=+1.574613225 container died 43a336ce2d98c52680f495344027da3c7af8dc49acc01472085aa1c9e113e585 (image=quay.io/ceph/ceph:v20, name=musing_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-800f96c4fb0db97c7487cffe91ee007186942a2c050040e8397d3e6c6dc82579-merged.mount: Deactivated successfully.
Dec 09 16:04:42 compute-0 podman[91029]: 2025-12-09 16:04:42.060031366 +0000 UTC m=+1.629374083 container remove 43a336ce2d98c52680f495344027da3c7af8dc49acc01472085aa1c9e113e585 (image=quay.io/ceph/ceph:v20, name=musing_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:42 compute-0 systemd[1]: libpod-conmon-43a336ce2d98c52680f495344027da3c7af8dc49acc01472085aa1c9e113e585.scope: Deactivated successfully.
Dec 09 16:04:42 compute-0 sudo[91026]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:42 compute-0 sudo[91104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smdzyjhajdskqkatgoqgaytcuegglkpn ; /usr/bin/python3'
Dec 09 16:04:42 compute-0 sudo[91104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:42 compute-0 python3[91106]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:42 compute-0 podman[91107]: 2025-12-09 16:04:42.481246891 +0000 UTC m=+0.054593845 container create f9ccbf48d7b01db35016cfa21a862a7acbf98118336d179558c69aa1f46f57a4 (image=quay.io/ceph/ceph:v20, name=wonderful_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:42 compute-0 systemd[1]: Started libpod-conmon-f9ccbf48d7b01db35016cfa21a862a7acbf98118336d179558c69aa1f46f57a4.scope.
Dec 09 16:04:42 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccf4e485309e2247ad5ccd33ac4d300e7c487a8776b681da882b018eed4aeb7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccf4e485309e2247ad5ccd33ac4d300e7c487a8776b681da882b018eed4aeb7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:42 compute-0 podman[91107]: 2025-12-09 16:04:42.552671022 +0000 UTC m=+0.126017996 container init f9ccbf48d7b01db35016cfa21a862a7acbf98118336d179558c69aa1f46f57a4 (image=quay.io/ceph/ceph:v20, name=wonderful_villani, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 09 16:04:42 compute-0 podman[91107]: 2025-12-09 16:04:42.464765482 +0000 UTC m=+0.038112466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:42 compute-0 podman[91107]: 2025-12-09 16:04:42.561888148 +0000 UTC m=+0.135235112 container start f9ccbf48d7b01db35016cfa21a862a7acbf98118336d179558c69aa1f46f57a4 (image=quay.io/ceph/ceph:v20, name=wonderful_villani, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:42 compute-0 podman[91107]: 2025-12-09 16:04:42.565165985 +0000 UTC m=+0.138512959 container attach f9ccbf48d7b01db35016cfa21a862a7acbf98118336d179558c69aa1f46f57a4 (image=quay.io/ceph/ceph:v20, name=wonderful_villani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:04:42 compute-0 ceph-mon[75222]: pgmap v59: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:42 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/338686529' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 09 16:04:42 compute-0 ceph-mon[75222]: osdmap e31: 3 total, 3 up, 3 in
Dec 09 16:04:42 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec 09 16:04:42 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3962534612' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 09 16:04:43 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:43 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 09 16:04:43 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3962534612' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 09 16:04:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3962534612' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 09 16:04:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec 09 16:04:44 compute-0 wonderful_villani[91122]: enabled application 'rbd' on pool 'volumes'
Dec 09 16:04:44 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec 09 16:04:44 compute-0 systemd[1]: libpod-f9ccbf48d7b01db35016cfa21a862a7acbf98118336d179558c69aa1f46f57a4.scope: Deactivated successfully.
Dec 09 16:04:44 compute-0 podman[91107]: 2025-12-09 16:04:44.024363824 +0000 UTC m=+1.597710788 container died f9ccbf48d7b01db35016cfa21a862a7acbf98118336d179558c69aa1f46f57a4 (image=quay.io/ceph/ceph:v20, name=wonderful_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 16:04:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-fccf4e485309e2247ad5ccd33ac4d300e7c487a8776b681da882b018eed4aeb7-merged.mount: Deactivated successfully.
Dec 09 16:04:44 compute-0 podman[91107]: 2025-12-09 16:04:44.059251073 +0000 UTC m=+1.632598027 container remove f9ccbf48d7b01db35016cfa21a862a7acbf98118336d179558c69aa1f46f57a4 (image=quay.io/ceph/ceph:v20, name=wonderful_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:44 compute-0 systemd[1]: libpod-conmon-f9ccbf48d7b01db35016cfa21a862a7acbf98118336d179558c69aa1f46f57a4.scope: Deactivated successfully.
Dec 09 16:04:44 compute-0 sudo[91104]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:44 compute-0 sudo[91179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smrzvdijfqwbfodbmjzmaetdsomminbk ; /usr/bin/python3'
Dec 09 16:04:44 compute-0 sudo[91179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:04:44 compute-0 python3[91181]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:44 compute-0 podman[91182]: 2025-12-09 16:04:44.433682532 +0000 UTC m=+0.049320704 container create 95bd57784ef1862e503eea0a59d3222ee920c4b87e9ea07322b241f666385fa9 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 09 16:04:44 compute-0 systemd[1]: Started libpod-conmon-95bd57784ef1862e503eea0a59d3222ee920c4b87e9ea07322b241f666385fa9.scope.
Dec 09 16:04:44 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aafd2e29b94d6dd5d648004acb164e1087bea5e82d4d86db142596b16b54074e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aafd2e29b94d6dd5d648004acb164e1087bea5e82d4d86db142596b16b54074e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:44 compute-0 podman[91182]: 2025-12-09 16:04:44.411464461 +0000 UTC m=+0.027102633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:44 compute-0 podman[91182]: 2025-12-09 16:04:44.574742648 +0000 UTC m=+0.190380800 container init 95bd57784ef1862e503eea0a59d3222ee920c4b87e9ea07322b241f666385fa9 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 09 16:04:44 compute-0 podman[91182]: 2025-12-09 16:04:44.581367514 +0000 UTC m=+0.197005686 container start 95bd57784ef1862e503eea0a59d3222ee920c4b87e9ea07322b241f666385fa9 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:44 compute-0 podman[91182]: 2025-12-09 16:04:44.585120134 +0000 UTC m=+0.200758286 container attach 95bd57784ef1862e503eea0a59d3222ee920c4b87e9ea07322b241f666385fa9 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:04:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec 09 16:04:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/74715882' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 09 16:04:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 09 16:04:45 compute-0 ceph-mon[75222]: pgmap v61: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:45 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3962534612' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 09 16:04:45 compute-0 ceph-mon[75222]: osdmap e32: 3 total, 3 up, 3 in
Dec 09 16:04:45 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/74715882' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 09 16:04:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/74715882' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 09 16:04:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec 09 16:04:45 compute-0 pedantic_lumiere[91197]: enabled application 'rbd' on pool 'backups'
Dec 09 16:04:45 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec 09 16:04:45 compute-0 systemd[1]: libpod-95bd57784ef1862e503eea0a59d3222ee920c4b87e9ea07322b241f666385fa9.scope: Deactivated successfully.
Dec 09 16:04:45 compute-0 podman[91182]: 2025-12-09 16:04:45.032630489 +0000 UTC m=+0.648268631 container died 95bd57784ef1862e503eea0a59d3222ee920c4b87e9ea07322b241f666385fa9 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-aafd2e29b94d6dd5d648004acb164e1087bea5e82d4d86db142596b16b54074e-merged.mount: Deactivated successfully.
Dec 09 16:04:45 compute-0 podman[91182]: 2025-12-09 16:04:45.06947378 +0000 UTC m=+0.685111922 container remove 95bd57784ef1862e503eea0a59d3222ee920c4b87e9ea07322b241f666385fa9 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:45 compute-0 systemd[1]: libpod-conmon-95bd57784ef1862e503eea0a59d3222ee920c4b87e9ea07322b241f666385fa9.scope: Deactivated successfully.
Dec 09 16:04:45 compute-0 sudo[91179]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:45 compute-0 sudo[91257]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wibbknjgdvntcdqvzadjsurlamgpfubr ; /usr/bin/python3'
Dec 09 16:04:45 compute-0 sudo[91257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:45 compute-0 python3[91259]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:45 compute-0 podman[91260]: 2025-12-09 16:04:45.468212566 +0000 UTC m=+0.067587970 container create 7162f2388d699e8e392f9d805fcddd9487adba86a7c0c10e154c3eb4f84aec43 (image=quay.io/ceph/ceph:v20, name=compassionate_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 09 16:04:45 compute-0 systemd[1]: Started libpod-conmon-7162f2388d699e8e392f9d805fcddd9487adba86a7c0c10e154c3eb4f84aec43.scope.
Dec 09 16:04:45 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b88f391534a56a861ec1d08be19ab477fbef17b07f3da57b372c2c311eb3a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b88f391534a56a861ec1d08be19ab477fbef17b07f3da57b372c2c311eb3a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:45 compute-0 podman[91260]: 2025-12-09 16:04:45.44093459 +0000 UTC m=+0.040309974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:45 compute-0 podman[91260]: 2025-12-09 16:04:45.542672429 +0000 UTC m=+0.142047883 container init 7162f2388d699e8e392f9d805fcddd9487adba86a7c0c10e154c3eb4f84aec43 (image=quay.io/ceph/ceph:v20, name=compassionate_tu, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:04:45 compute-0 podman[91260]: 2025-12-09 16:04:45.552922482 +0000 UTC m=+0.152297856 container start 7162f2388d699e8e392f9d805fcddd9487adba86a7c0c10e154c3eb4f84aec43 (image=quay.io/ceph/ceph:v20, name=compassionate_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 09 16:04:45 compute-0 podman[91260]: 2025-12-09 16:04:45.556094986 +0000 UTC m=+0.155470370 container attach 7162f2388d699e8e392f9d805fcddd9487adba86a7c0c10e154c3eb4f84aec43 (image=quay.io/ceph/ceph:v20, name=compassionate_tu, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:45 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 09 16:04:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4265420435' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 09 16:04:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 09 16:04:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4265420435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 09 16:04:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec 09 16:04:46 compute-0 compassionate_tu[91275]: enabled application 'rbd' on pool 'images'
Dec 09 16:04:46 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/74715882' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 09 16:04:46 compute-0 ceph-mon[75222]: osdmap e33: 3 total, 3 up, 3 in
Dec 09 16:04:46 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4265420435' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 09 16:04:46 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec 09 16:04:46 compute-0 systemd[1]: libpod-7162f2388d699e8e392f9d805fcddd9487adba86a7c0c10e154c3eb4f84aec43.scope: Deactivated successfully.
Dec 09 16:04:46 compute-0 podman[91260]: 2025-12-09 16:04:46.045536027 +0000 UTC m=+0.644911441 container died 7162f2388d699e8e392f9d805fcddd9487adba86a7c0c10e154c3eb4f84aec43 (image=quay.io/ceph/ceph:v20, name=compassionate_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-48b88f391534a56a861ec1d08be19ab477fbef17b07f3da57b372c2c311eb3a4-merged.mount: Deactivated successfully.
Dec 09 16:04:46 compute-0 podman[91260]: 2025-12-09 16:04:46.087672549 +0000 UTC m=+0.687047913 container remove 7162f2388d699e8e392f9d805fcddd9487adba86a7c0c10e154c3eb4f84aec43 (image=quay.io/ceph/ceph:v20, name=compassionate_tu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:46 compute-0 systemd[1]: libpod-conmon-7162f2388d699e8e392f9d805fcddd9487adba86a7c0c10e154c3eb4f84aec43.scope: Deactivated successfully.
Dec 09 16:04:46 compute-0 sudo[91257]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:46 compute-0 sudo[91333]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skpnczjxqqavmoovltzzbumcmjlbjjrp ; /usr/bin/python3'
Dec 09 16:04:46 compute-0 sudo[91333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:46 compute-0 python3[91335]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:46 compute-0 podman[91336]: 2025-12-09 16:04:46.438762787 +0000 UTC m=+0.041923698 container create b355d33ced67ef5ac2f7852188938c37a4d6b04dd8447838510b4ce8e734402f (image=quay.io/ceph/ceph:v20, name=tender_hypatia, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:46 compute-0 systemd[1]: Started libpod-conmon-b355d33ced67ef5ac2f7852188938c37a4d6b04dd8447838510b4ce8e734402f.scope.
Dec 09 16:04:46 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/792f260ab05b41710d47ffd15dda4b8601ea4a3dc8ee946e021dc6f3b1fa67a8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/792f260ab05b41710d47ffd15dda4b8601ea4a3dc8ee946e021dc6f3b1fa67a8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:46 compute-0 podman[91336]: 2025-12-09 16:04:46.423676665 +0000 UTC m=+0.026837566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:46 compute-0 podman[91336]: 2025-12-09 16:04:46.531826734 +0000 UTC m=+0.134987625 container init b355d33ced67ef5ac2f7852188938c37a4d6b04dd8447838510b4ce8e734402f (image=quay.io/ceph/ceph:v20, name=tender_hypatia, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:04:46 compute-0 podman[91336]: 2025-12-09 16:04:46.54104323 +0000 UTC m=+0.144204111 container start b355d33ced67ef5ac2f7852188938c37a4d6b04dd8447838510b4ce8e734402f (image=quay.io/ceph/ceph:v20, name=tender_hypatia, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:46 compute-0 podman[91336]: 2025-12-09 16:04:46.544729178 +0000 UTC m=+0.147890049 container attach b355d33ced67ef5ac2f7852188938c37a4d6b04dd8447838510b4ce8e734402f (image=quay.io/ceph/ceph:v20, name=tender_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec 09 16:04:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/907906848' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 09 16:04:47 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 09 16:04:47 compute-0 ceph-mon[75222]: pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:47 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4265420435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 09 16:04:47 compute-0 ceph-mon[75222]: osdmap e34: 3 total, 3 up, 3 in
Dec 09 16:04:47 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/907906848' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 09 16:04:47 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/907906848' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 09 16:04:47 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec 09 16:04:47 compute-0 tender_hypatia[91352]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 09 16:04:47 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec 09 16:04:47 compute-0 systemd[1]: libpod-b355d33ced67ef5ac2f7852188938c37a4d6b04dd8447838510b4ce8e734402f.scope: Deactivated successfully.
Dec 09 16:04:47 compute-0 podman[91336]: 2025-12-09 16:04:47.061850546 +0000 UTC m=+0.665011457 container died b355d33ced67ef5ac2f7852188938c37a4d6b04dd8447838510b4ce8e734402f (image=quay.io/ceph/ceph:v20, name=tender_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:04:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-792f260ab05b41710d47ffd15dda4b8601ea4a3dc8ee946e021dc6f3b1fa67a8-merged.mount: Deactivated successfully.
Dec 09 16:04:47 compute-0 podman[91336]: 2025-12-09 16:04:47.112793632 +0000 UTC m=+0.715954523 container remove b355d33ced67ef5ac2f7852188938c37a4d6b04dd8447838510b4ce8e734402f (image=quay.io/ceph/ceph:v20, name=tender_hypatia, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 09 16:04:47 compute-0 systemd[1]: libpod-conmon-b355d33ced67ef5ac2f7852188938c37a4d6b04dd8447838510b4ce8e734402f.scope: Deactivated successfully.
Dec 09 16:04:47 compute-0 sudo[91333]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:47 compute-0 sudo[91413]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkgvktplcxxatalituttikblywwvffff ; /usr/bin/python3'
Dec 09 16:04:47 compute-0 sudo[91413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:47 compute-0 python3[91415]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:47 compute-0 podman[91416]: 2025-12-09 16:04:47.49316675 +0000 UTC m=+0.053631449 container create 5a7cccb4a9173ea907fb91cba931b72d4f166d9c3f110fbe815d987be744f36f (image=quay.io/ceph/ceph:v20, name=unruffled_booth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 09 16:04:47 compute-0 systemd[1]: Started libpod-conmon-5a7cccb4a9173ea907fb91cba931b72d4f166d9c3f110fbe815d987be744f36f.scope.
Dec 09 16:04:47 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4f619e5f9783e9771cda015e12691f5a36e20de6b19ea15d54070abbcc49d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4f619e5f9783e9771cda015e12691f5a36e20de6b19ea15d54070abbcc49d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:47 compute-0 podman[91416]: 2025-12-09 16:04:47.465826432 +0000 UTC m=+0.026291121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:47 compute-0 podman[91416]: 2025-12-09 16:04:47.572302346 +0000 UTC m=+0.132767005 container init 5a7cccb4a9173ea907fb91cba931b72d4f166d9c3f110fbe815d987be744f36f (image=quay.io/ceph/ceph:v20, name=unruffled_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:47 compute-0 podman[91416]: 2025-12-09 16:04:47.57733864 +0000 UTC m=+0.137803299 container start 5a7cccb4a9173ea907fb91cba931b72d4f166d9c3f110fbe815d987be744f36f (image=quay.io/ceph/ceph:v20, name=unruffled_booth, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:47 compute-0 podman[91416]: 2025-12-09 16:04:47.580290838 +0000 UTC m=+0.140760748 container attach 5a7cccb4a9173ea907fb91cba931b72d4f166d9c3f110fbe815d987be744f36f (image=quay.io/ceph/ceph:v20, name=unruffled_booth, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:04:47 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec 09 16:04:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/382360941' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 09 16:04:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 09 16:04:48 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/907906848' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 09 16:04:48 compute-0 ceph-mon[75222]: osdmap e35: 3 total, 3 up, 3 in
Dec 09 16:04:48 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/382360941' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 09 16:04:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/382360941' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 09 16:04:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec 09 16:04:48 compute-0 unruffled_booth[91432]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 09 16:04:48 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec 09 16:04:48 compute-0 systemd[1]: libpod-5a7cccb4a9173ea907fb91cba931b72d4f166d9c3f110fbe815d987be744f36f.scope: Deactivated successfully.
Dec 09 16:04:48 compute-0 podman[91416]: 2025-12-09 16:04:48.072643547 +0000 UTC m=+0.633108216 container died 5a7cccb4a9173ea907fb91cba931b72d4f166d9c3f110fbe815d987be744f36f (image=quay.io/ceph/ceph:v20, name=unruffled_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 09 16:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b4f619e5f9783e9771cda015e12691f5a36e20de6b19ea15d54070abbcc49d4-merged.mount: Deactivated successfully.
Dec 09 16:04:48 compute-0 podman[91416]: 2025-12-09 16:04:48.108745888 +0000 UTC m=+0.669210547 container remove 5a7cccb4a9173ea907fb91cba931b72d4f166d9c3f110fbe815d987be744f36f (image=quay.io/ceph/ceph:v20, name=unruffled_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 16:04:48 compute-0 systemd[1]: libpod-conmon-5a7cccb4a9173ea907fb91cba931b72d4f166d9c3f110fbe815d987be744f36f.scope: Deactivated successfully.
Dec 09 16:04:48 compute-0 sudo[91413]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:49 compute-0 ceph-mon[75222]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:49 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/382360941' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 09 16:04:49 compute-0 ceph-mon[75222]: osdmap e36: 3 total, 3 up, 3 in
Dec 09 16:04:49 compute-0 python3[91544]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 16:04:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:04:49 compute-0 python3[91615]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765296288.856523-36757-84030933058460/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:04:49 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:50 compute-0 sudo[91715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqhrjkokiwdcqbngbqvxvnpiocqimadm ; /usr/bin/python3'
Dec 09 16:04:50 compute-0 sudo[91715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:50 compute-0 python3[91717]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 16:04:50 compute-0 sudo[91715]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:50 compute-0 sudo[91790]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enrpesufirdwmablxrbosftdcwadzfgk ; /usr/bin/python3'
Dec 09 16:04:50 compute-0 sudo[91790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:50 compute-0 python3[91792]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765296289.9371839-36771-5223970509620/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=c08e96fc9ed4af6dd597651ce6c551fd440827e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:04:50 compute-0 sudo[91790]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:50 compute-0 sudo[91840]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnpcbtammmbekughpaoaxdebizpqltar ; /usr/bin/python3'
Dec 09 16:04:50 compute-0 sudo[91840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:51 compute-0 ceph-mon[75222]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:51 compute-0 python3[91842]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:51 compute-0 podman[91843]: 2025-12-09 16:04:51.186352717 +0000 UTC m=+0.042117572 container create 62556dcae11287f1876dc3d01251deaa9347b433318e54dc22a7fba9b87abad2 (image=quay.io/ceph/ceph:v20, name=dazzling_dewdney, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:51 compute-0 systemd[1]: Started libpod-conmon-62556dcae11287f1876dc3d01251deaa9347b433318e54dc22a7fba9b87abad2.scope.
Dec 09 16:04:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3abad0199f780042a38ec2d7c3707c1a9115b003e9dbcaa43289d443dc8fde/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3abad0199f780042a38ec2d7c3707c1a9115b003e9dbcaa43289d443dc8fde/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3abad0199f780042a38ec2d7c3707c1a9115b003e9dbcaa43289d443dc8fde/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:51 compute-0 podman[91843]: 2025-12-09 16:04:51.168993705 +0000 UTC m=+0.024758600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:51 compute-0 podman[91843]: 2025-12-09 16:04:51.285436995 +0000 UTC m=+0.141201900 container init 62556dcae11287f1876dc3d01251deaa9347b433318e54dc22a7fba9b87abad2 (image=quay.io/ceph/ceph:v20, name=dazzling_dewdney, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 09 16:04:51 compute-0 podman[91843]: 2025-12-09 16:04:51.291185738 +0000 UTC m=+0.146950613 container start 62556dcae11287f1876dc3d01251deaa9347b433318e54dc22a7fba9b87abad2 (image=quay.io/ceph/ceph:v20, name=dazzling_dewdney, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:51 compute-0 podman[91843]: 2025-12-09 16:04:51.29498866 +0000 UTC m=+0.150753505 container attach 62556dcae11287f1876dc3d01251deaa9347b433318e54dc22a7fba9b87abad2 (image=quay.io/ceph/ceph:v20, name=dazzling_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 09 16:04:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 09 16:04:51 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2412432011' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 09 16:04:51 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2412432011' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 09 16:04:51 compute-0 dazzling_dewdney[91858]: 
Dec 09 16:04:51 compute-0 dazzling_dewdney[91858]: [global]
Dec 09 16:04:51 compute-0 dazzling_dewdney[91858]:         fsid = 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:04:51 compute-0 dazzling_dewdney[91858]:         mon_host = 192.168.122.100
Dec 09 16:04:51 compute-0 dazzling_dewdney[91858]:         rgw_keystone_api_version = 3
Dec 09 16:04:51 compute-0 systemd[1]: libpod-62556dcae11287f1876dc3d01251deaa9347b433318e54dc22a7fba9b87abad2.scope: Deactivated successfully.
Dec 09 16:04:51 compute-0 podman[91843]: 2025-12-09 16:04:51.693618883 +0000 UTC m=+0.549383738 container died 62556dcae11287f1876dc3d01251deaa9347b433318e54dc22a7fba9b87abad2 (image=quay.io/ceph/ceph:v20, name=dazzling_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c3abad0199f780042a38ec2d7c3707c1a9115b003e9dbcaa43289d443dc8fde-merged.mount: Deactivated successfully.
Dec 09 16:04:51 compute-0 podman[91843]: 2025-12-09 16:04:51.73108032 +0000 UTC m=+0.586845175 container remove 62556dcae11287f1876dc3d01251deaa9347b433318e54dc22a7fba9b87abad2 (image=quay.io/ceph/ceph:v20, name=dazzling_dewdney, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:51 compute-0 sudo[91883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:51 compute-0 systemd[1]: libpod-conmon-62556dcae11287f1876dc3d01251deaa9347b433318e54dc22a7fba9b87abad2.scope: Deactivated successfully.
Dec 09 16:04:51 compute-0 sudo[91883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:51 compute-0 sudo[91840]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:51 compute-0 sudo[91883]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:51 compute-0 sudo[91921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 09 16:04:51 compute-0 sudo[91921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:51 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:51 compute-0 sudo[91969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxkgoaihrejrwcmdxnfqixzgyesozgzj ; /usr/bin/python3'
Dec 09 16:04:51 compute-0 sudo[91969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:52 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2412432011' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 09 16:04:52 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2412432011' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 09 16:04:52 compute-0 python3[91971]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:52 compute-0 podman[92003]: 2025-12-09 16:04:52.16373886 +0000 UTC m=+0.044131986 container create 65d0243c14d4071fc085354b61adfb202ca1be69fdb9d2c8c8ffa8701c172436 (image=quay.io/ceph/ceph:v20, name=great_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:52 compute-0 systemd[1]: Started libpod-conmon-65d0243c14d4071fc085354b61adfb202ca1be69fdb9d2c8c8ffa8701c172436.scope.
Dec 09 16:04:52 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c6ba1e4871c32f2c9ad6a8d29ce8d59bb6f5d724d2187c6020d89ee905ac819/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c6ba1e4871c32f2c9ad6a8d29ce8d59bb6f5d724d2187c6020d89ee905ac819/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c6ba1e4871c32f2c9ad6a8d29ce8d59bb6f5d724d2187c6020d89ee905ac819/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:52 compute-0 podman[92003]: 2025-12-09 16:04:52.142120624 +0000 UTC m=+0.022513790 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:52 compute-0 podman[92003]: 2025-12-09 16:04:52.240944855 +0000 UTC m=+0.121338011 container init 65d0243c14d4071fc085354b61adfb202ca1be69fdb9d2c8c8ffa8701c172436 (image=quay.io/ceph/ceph:v20, name=great_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:52 compute-0 podman[92028]: 2025-12-09 16:04:52.242144987 +0000 UTC m=+0.064130258 container exec 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:52 compute-0 podman[92003]: 2025-12-09 16:04:52.247805398 +0000 UTC m=+0.128198534 container start 65d0243c14d4071fc085354b61adfb202ca1be69fdb9d2c8c8ffa8701c172436 (image=quay.io/ceph/ceph:v20, name=great_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:52 compute-0 podman[92003]: 2025-12-09 16:04:52.250967372 +0000 UTC m=+0.131360538 container attach 65d0243c14d4071fc085354b61adfb202ca1be69fdb9d2c8c8ffa8701c172436 (image=quay.io/ceph/ceph:v20, name=great_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:52 compute-0 podman[92028]: 2025-12-09 16:04:52.342200321 +0000 UTC m=+0.164185622 container exec_died 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:52 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec 09 16:04:52 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157267882' entity='client.admin' 
Dec 09 16:04:52 compute-0 great_haslett[92041]: set ssl_option
Dec 09 16:04:52 compute-0 systemd[1]: libpod-65d0243c14d4071fc085354b61adfb202ca1be69fdb9d2c8c8ffa8701c172436.scope: Deactivated successfully.
Dec 09 16:04:52 compute-0 podman[92003]: 2025-12-09 16:04:52.822137859 +0000 UTC m=+0.702530975 container died 65d0243c14d4071fc085354b61adfb202ca1be69fdb9d2c8c8ffa8701c172436 (image=quay.io/ceph/ceph:v20, name=great_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c6ba1e4871c32f2c9ad6a8d29ce8d59bb6f5d724d2187c6020d89ee905ac819-merged.mount: Deactivated successfully.
Dec 09 16:04:52 compute-0 podman[92003]: 2025-12-09 16:04:52.878953972 +0000 UTC m=+0.759347128 container remove 65d0243c14d4071fc085354b61adfb202ca1be69fdb9d2c8c8ffa8701c172436 (image=quay.io/ceph/ceph:v20, name=great_haslett, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:52 compute-0 systemd[1]: libpod-conmon-65d0243c14d4071fc085354b61adfb202ca1be69fdb9d2c8c8ffa8701c172436.scope: Deactivated successfully.
Dec 09 16:04:52 compute-0 sudo[91969]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:53 compute-0 sudo[91921]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:04:53 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:04:53 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:53 compute-0 sudo[92255]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnfdqdpltzutjdwtcahtgkedxcwpwwmh ; /usr/bin/python3'
Dec 09 16:04:53 compute-0 sudo[92255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:53 compute-0 sudo[92227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:53 compute-0 sudo[92227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:53 compute-0 sudo[92227]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:53 compute-0 ceph-mon[75222]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:53 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4157267882' entity='client.admin' 
Dec 09 16:04:53 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:53 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:53 compute-0 sudo[92266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:04:53 compute-0 sudo[92266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:53 compute-0 python3[92263]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:53 compute-0 podman[92291]: 2025-12-09 16:04:53.274675908 +0000 UTC m=+0.063510442 container create eb00374c1b573c394e79124f56b630b932eb79362937af98b3c37cd9c930ec19 (image=quay.io/ceph/ceph:v20, name=friendly_hellman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:53 compute-0 systemd[1]: Started libpod-conmon-eb00374c1b573c394e79124f56b630b932eb79362937af98b3c37cd9c930ec19.scope.
Dec 09 16:04:53 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70921bdddae795918bc08efa9d383f0ad35d413ead015efd5c5e8798d072fa68/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70921bdddae795918bc08efa9d383f0ad35d413ead015efd5c5e8798d072fa68/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70921bdddae795918bc08efa9d383f0ad35d413ead015efd5c5e8798d072fa68/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:53 compute-0 podman[92291]: 2025-12-09 16:04:53.248189683 +0000 UTC m=+0.037024297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:53 compute-0 podman[92291]: 2025-12-09 16:04:53.352629883 +0000 UTC m=+0.141464407 container init eb00374c1b573c394e79124f56b630b932eb79362937af98b3c37cd9c930ec19 (image=quay.io/ceph/ceph:v20, name=friendly_hellman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:53 compute-0 podman[92291]: 2025-12-09 16:04:53.359872586 +0000 UTC m=+0.148707120 container start eb00374c1b573c394e79124f56b630b932eb79362937af98b3c37cd9c930ec19 (image=quay.io/ceph/ceph:v20, name=friendly_hellman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:04:53 compute-0 podman[92291]: 2025-12-09 16:04:53.36340312 +0000 UTC m=+0.152237664 container attach eb00374c1b573c394e79124f56b630b932eb79362937af98b3c37cd9c930ec19 (image=quay.io/ceph/ceph:v20, name=friendly_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 09 16:04:53 compute-0 sudo[92266]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:04:53 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:04:53 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:04:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:04:53 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:04:53 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:04:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:04:53 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:04:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:04:53 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:53 compute-0 sudo[92358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:53 compute-0 sudo[92358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:53 compute-0 sudo[92358]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:53 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:04:53 compute-0 ceph-mgr[75515]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec 09 16:04:53 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 09 16:04:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 09 16:04:53 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:53 compute-0 friendly_hellman[92316]: Scheduled rgw.rgw update...
Dec 09 16:04:53 compute-0 systemd[1]: libpod-eb00374c1b573c394e79124f56b630b932eb79362937af98b3c37cd9c930ec19.scope: Deactivated successfully.
Dec 09 16:04:53 compute-0 podman[92291]: 2025-12-09 16:04:53.800328293 +0000 UTC m=+0.589162817 container died eb00374c1b573c394e79124f56b630b932eb79362937af98b3c37cd9c930ec19 (image=quay.io/ceph/ceph:v20, name=friendly_hellman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Dec 09 16:04:53 compute-0 sudo[92383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:04:53 compute-0 sudo[92383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-70921bdddae795918bc08efa9d383f0ad35d413ead015efd5c5e8798d072fa68-merged.mount: Deactivated successfully.
Dec 09 16:04:53 compute-0 podman[92291]: 2025-12-09 16:04:53.829800008 +0000 UTC m=+0.618634522 container remove eb00374c1b573c394e79124f56b630b932eb79362937af98b3c37cd9c930ec19 (image=quay.io/ceph/ceph:v20, name=friendly_hellman, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 09 16:04:53 compute-0 systemd[1]: libpod-conmon-eb00374c1b573c394e79124f56b630b932eb79362937af98b3c37cd9c930ec19.scope: Deactivated successfully.
Dec 09 16:04:53 compute-0 sudo[92255]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:53 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:04:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:04:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:04:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:04:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:54 compute-0 podman[92434]: 2025-12-09 16:04:54.115323 +0000 UTC m=+0.057972035 container create c62d3bc525785304048dd1f5265cea183b8c4d4b0fe029b787c7120b8237796d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_gates, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:54 compute-0 systemd[1]: Started libpod-conmon-c62d3bc525785304048dd1f5265cea183b8c4d4b0fe029b787c7120b8237796d.scope.
Dec 09 16:04:54 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:54 compute-0 podman[92434]: 2025-12-09 16:04:54.092489002 +0000 UTC m=+0.035138117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:54 compute-0 podman[92434]: 2025-12-09 16:04:54.185136159 +0000 UTC m=+0.127785234 container init c62d3bc525785304048dd1f5265cea183b8c4d4b0fe029b787c7120b8237796d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:54 compute-0 podman[92434]: 2025-12-09 16:04:54.19494353 +0000 UTC m=+0.137592605 container start c62d3bc525785304048dd1f5265cea183b8c4d4b0fe029b787c7120b8237796d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_gates, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 09 16:04:54 compute-0 compassionate_gates[92450]: 167 167
Dec 09 16:04:54 compute-0 systemd[1]: libpod-c62d3bc525785304048dd1f5265cea183b8c4d4b0fe029b787c7120b8237796d.scope: Deactivated successfully.
Dec 09 16:04:54 compute-0 podman[92434]: 2025-12-09 16:04:54.199492391 +0000 UTC m=+0.142141456 container attach c62d3bc525785304048dd1f5265cea183b8c4d4b0fe029b787c7120b8237796d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_gates, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:54 compute-0 podman[92434]: 2025-12-09 16:04:54.199973854 +0000 UTC m=+0.142622929 container died c62d3bc525785304048dd1f5265cea183b8c4d4b0fe029b787c7120b8237796d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_gates, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 16:04:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f87dc1d3ce0e6f0f704a70d45853c7a8f1bb547f822608805a2419c5ac944f9-merged.mount: Deactivated successfully.
Dec 09 16:04:54 compute-0 podman[92434]: 2025-12-09 16:04:54.256397196 +0000 UTC m=+0.199046241 container remove c62d3bc525785304048dd1f5265cea183b8c4d4b0fe029b787c7120b8237796d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_gates, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:04:54 compute-0 systemd[1]: libpod-conmon-c62d3bc525785304048dd1f5265cea183b8c4d4b0fe029b787c7120b8237796d.scope: Deactivated successfully.
Dec 09 16:04:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:04:54 compute-0 podman[92474]: 2025-12-09 16:04:54.432461644 +0000 UTC m=+0.036882183 container create 81744cb3b605894236731022c1196d74a0922396039428016f45cbb884bad285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:04:54 compute-0 systemd[1]: Started libpod-conmon-81744cb3b605894236731022c1196d74a0922396039428016f45cbb884bad285.scope.
Dec 09 16:04:54 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579a8296be57e0ff843113c65e486320705814c570e2567e759aff0650e6737e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579a8296be57e0ff843113c65e486320705814c570e2567e759aff0650e6737e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579a8296be57e0ff843113c65e486320705814c570e2567e759aff0650e6737e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579a8296be57e0ff843113c65e486320705814c570e2567e759aff0650e6737e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579a8296be57e0ff843113c65e486320705814c570e2567e759aff0650e6737e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:54 compute-0 podman[92474]: 2025-12-09 16:04:54.416135709 +0000 UTC m=+0.020556268 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:54 compute-0 podman[92474]: 2025-12-09 16:04:54.516171922 +0000 UTC m=+0.120592491 container init 81744cb3b605894236731022c1196d74a0922396039428016f45cbb884bad285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_dirac, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 16:04:54 compute-0 podman[92474]: 2025-12-09 16:04:54.524826963 +0000 UTC m=+0.129247502 container start 81744cb3b605894236731022c1196d74a0922396039428016f45cbb884bad285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:54 compute-0 podman[92474]: 2025-12-09 16:04:54.527912115 +0000 UTC m=+0.132332654 container attach 81744cb3b605894236731022c1196d74a0922396039428016f45cbb884bad285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 09 16:04:54 compute-0 python3[92571]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 16:04:55 compute-0 hungry_dirac[92496]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:04:55 compute-0 hungry_dirac[92496]: --> All data devices are unavailable
Dec 09 16:04:55 compute-0 systemd[1]: libpod-81744cb3b605894236731022c1196d74a0922396039428016f45cbb884bad285.scope: Deactivated successfully.
Dec 09 16:04:55 compute-0 podman[92474]: 2025-12-09 16:04:55.047254091 +0000 UTC m=+0.651674630 container died 81744cb3b605894236731022c1196d74a0922396039428016f45cbb884bad285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 09 16:04:55 compute-0 python3[92655]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765296294.4949014-36812-185021979949078/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:04:55 compute-0 ceph-mon[75222]: from='client.14236 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:04:55 compute-0 ceph-mon[75222]: Saving service rgw.rgw spec with placement compute-0
Dec 09 16:04:55 compute-0 ceph-mon[75222]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-579a8296be57e0ff843113c65e486320705814c570e2567e759aff0650e6737e-merged.mount: Deactivated successfully.
Dec 09 16:04:55 compute-0 podman[92474]: 2025-12-09 16:04:55.247093112 +0000 UTC m=+0.851513651 container remove 81744cb3b605894236731022c1196d74a0922396039428016f45cbb884bad285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:55 compute-0 systemd[1]: libpod-conmon-81744cb3b605894236731022c1196d74a0922396039428016f45cbb884bad285.scope: Deactivated successfully.
Dec 09 16:04:55 compute-0 sudo[92383]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:55 compute-0 sudo[92697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:55 compute-0 sudo[92697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:55 compute-0 sudo[92697]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:55 compute-0 sudo[92722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:04:55 compute-0 sudo[92722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:55 compute-0 sudo[92770]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywbqjiwwsbssxnefxeeinfsxfjslylxy ; /usr/bin/python3'
Dec 09 16:04:55 compute-0 sudo[92770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:55 compute-0 python3[92772]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:55 compute-0 podman[92786]: 2025-12-09 16:04:55.685063102 +0000 UTC m=+0.039590855 container create ed4107e479b3ff53a86b09609d27d099d200c1444d9de6b48f743a4b09a249f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 09 16:04:55 compute-0 podman[92788]: 2025-12-09 16:04:55.706326239 +0000 UTC m=+0.048700278 container create acaa32c2072adfdf7c544dad4a194109ca561b5eab6891173235f9596edb8b25 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 16:04:55 compute-0 systemd[1]: Started libpod-conmon-ed4107e479b3ff53a86b09609d27d099d200c1444d9de6b48f743a4b09a249f2.scope.
Dec 09 16:04:55 compute-0 systemd[1]: Started libpod-conmon-acaa32c2072adfdf7c544dad4a194109ca561b5eab6891173235f9596edb8b25.scope.
Dec 09 16:04:55 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:55 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a447f41041efa96cce033437e83ea9fc686081d66336bc79cf38c80dc4971e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a447f41041efa96cce033437e83ea9fc686081d66336bc79cf38c80dc4971e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a447f41041efa96cce033437e83ea9fc686081d66336bc79cf38c80dc4971e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:55 compute-0 podman[92786]: 2025-12-09 16:04:55.746675253 +0000 UTC m=+0.101203006 container init ed4107e479b3ff53a86b09609d27d099d200c1444d9de6b48f743a4b09a249f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_almeida, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 16:04:55 compute-0 podman[92788]: 2025-12-09 16:04:55.752858937 +0000 UTC m=+0.095232996 container init acaa32c2072adfdf7c544dad4a194109ca561b5eab6891173235f9596edb8b25 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 09 16:04:55 compute-0 podman[92786]: 2025-12-09 16:04:55.757391988 +0000 UTC m=+0.111919741 container start ed4107e479b3ff53a86b09609d27d099d200c1444d9de6b48f743a4b09a249f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:55 compute-0 podman[92788]: 2025-12-09 16:04:55.760002698 +0000 UTC m=+0.102376757 container start acaa32c2072adfdf7c544dad4a194109ca561b5eab6891173235f9596edb8b25 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:04:55 compute-0 hopeful_almeida[92817]: 167 167
Dec 09 16:04:55 compute-0 systemd[1]: libpod-ed4107e479b3ff53a86b09609d27d099d200c1444d9de6b48f743a4b09a249f2.scope: Deactivated successfully.
Dec 09 16:04:55 compute-0 podman[92786]: 2025-12-09 16:04:55.763296225 +0000 UTC m=+0.117823988 container attach ed4107e479b3ff53a86b09609d27d099d200c1444d9de6b48f743a4b09a249f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_almeida, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 16:04:55 compute-0 podman[92786]: 2025-12-09 16:04:55.668211404 +0000 UTC m=+0.022739177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:55 compute-0 podman[92788]: 2025-12-09 16:04:55.76610518 +0000 UTC m=+0.108479229 container attach acaa32c2072adfdf7c544dad4a194109ca561b5eab6891173235f9596edb8b25 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:55 compute-0 podman[92786]: 2025-12-09 16:04:55.766931622 +0000 UTC m=+0.121459375 container died ed4107e479b3ff53a86b09609d27d099d200c1444d9de6b48f743a4b09a249f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 09 16:04:55 compute-0 podman[92788]: 2025-12-09 16:04:55.682558286 +0000 UTC m=+0.024932365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3c10f586dfd4bc174169fe53e6e95e6ae2d41cfb7bf1a8215b3e190aa154246-merged.mount: Deactivated successfully.
Dec 09 16:04:55 compute-0 podman[92786]: 2025-12-09 16:04:55.804309047 +0000 UTC m=+0.158836800 container remove ed4107e479b3ff53a86b09609d27d099d200c1444d9de6b48f743a4b09a249f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 09 16:04:55 compute-0 systemd[1]: libpod-conmon-ed4107e479b3ff53a86b09609d27d099d200c1444d9de6b48f743a4b09a249f2.scope: Deactivated successfully.
Dec 09 16:04:55 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:55 compute-0 podman[92864]: 2025-12-09 16:04:55.960058364 +0000 UTC m=+0.050115295 container create 97a0e97e87a969e66a7c41dd081c83105855bce709a9b4236c3f6272a8dec86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_nobel, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:04:56 compute-0 systemd[1]: Started libpod-conmon-97a0e97e87a969e66a7c41dd081c83105855bce709a9b4236c3f6272a8dec86a.scope.
Dec 09 16:04:56 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc070944c12ee20f1e5f975383478adea25766e8a1bfc2678e88d08a93b5efe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc070944c12ee20f1e5f975383478adea25766e8a1bfc2678e88d08a93b5efe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc070944c12ee20f1e5f975383478adea25766e8a1bfc2678e88d08a93b5efe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc070944c12ee20f1e5f975383478adea25766e8a1bfc2678e88d08a93b5efe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:56 compute-0 podman[92864]: 2025-12-09 16:04:55.938177351 +0000 UTC m=+0.028234322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:56 compute-0 podman[92864]: 2025-12-09 16:04:56.043755523 +0000 UTC m=+0.133812464 container init 97a0e97e87a969e66a7c41dd081c83105855bce709a9b4236c3f6272a8dec86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec 09 16:04:56 compute-0 podman[92864]: 2025-12-09 16:04:56.057237561 +0000 UTC m=+0.147294512 container start 97a0e97e87a969e66a7c41dd081c83105855bce709a9b4236c3f6272a8dec86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 09 16:04:56 compute-0 podman[92864]: 2025-12-09 16:04:56.060763725 +0000 UTC m=+0.150820756 container attach 97a0e97e87a969e66a7c41dd081c83105855bce709a9b4236c3f6272a8dec86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:04:56 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:04:56 compute-0 ceph-mgr[75515]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 09 16:04:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec 09 16:04:56 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 09 16:04:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec 09 16:04:56 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 09 16:04:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec 09 16:04:56 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 09 16:04:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 09 16:04:56 compute-0 ceph-mon[75222]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 09 16:04:56 compute-0 ceph-mon[75222]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 09 16:04:56 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0[75218]: 2025-12-09T16:04:56.239+0000 7f4a55048640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 09 16:04:56 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 09 16:04:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).mds e2 new map
Dec 09 16:04:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-12-09T16:04:56.239709+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T16:04:56.239490+0000
                                           modified        2025-12-09T16:04:56.239490+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 09 16:04:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec 09 16:04:56 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec 09 16:04:56 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 09 16:04:56 compute-0 ceph-mgr[75515]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 09 16:04:56 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 09 16:04:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 09 16:04:56 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:56 compute-0 ceph-mgr[75515]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 09 16:04:56 compute-0 systemd[1]: libpod-acaa32c2072adfdf7c544dad4a194109ca561b5eab6891173235f9596edb8b25.scope: Deactivated successfully.
Dec 09 16:04:56 compute-0 conmon[92819]: conmon acaa32c2072adfdf7c54 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acaa32c2072adfdf7c544dad4a194109ca561b5eab6891173235f9596edb8b25.scope/container/memory.events
Dec 09 16:04:56 compute-0 podman[92788]: 2025-12-09 16:04:56.27577358 +0000 UTC m=+0.618147639 container died acaa32c2072adfdf7c544dad4a194109ca561b5eab6891173235f9596edb8b25 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9a447f41041efa96cce033437e83ea9fc686081d66336bc79cf38c80dc4971e-merged.mount: Deactivated successfully.
Dec 09 16:04:56 compute-0 podman[92788]: 2025-12-09 16:04:56.312398735 +0000 UTC m=+0.654772774 container remove acaa32c2072adfdf7c544dad4a194109ca561b5eab6891173235f9596edb8b25 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:56 compute-0 eager_nobel[92880]: {
Dec 09 16:04:56 compute-0 eager_nobel[92880]:     "0": [
Dec 09 16:04:56 compute-0 eager_nobel[92880]:         {
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "devices": [
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "/dev/loop3"
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             ],
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_name": "ceph_lv0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_size": "21470642176",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "name": "ceph_lv0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "tags": {
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.cluster_name": "ceph",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.crush_device_class": "",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.encrypted": "0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.objectstore": "bluestore",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.osd_id": "0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.type": "block",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.vdo": "0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.with_tpm": "0"
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             },
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "type": "block",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "vg_name": "ceph_vg0"
Dec 09 16:04:56 compute-0 eager_nobel[92880]:         }
Dec 09 16:04:56 compute-0 eager_nobel[92880]:     ],
Dec 09 16:04:56 compute-0 eager_nobel[92880]:     "1": [
Dec 09 16:04:56 compute-0 eager_nobel[92880]:         {
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "devices": [
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "/dev/loop4"
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             ],
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_name": "ceph_lv1",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_size": "21470642176",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "name": "ceph_lv1",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "tags": {
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.cluster_name": "ceph",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.crush_device_class": "",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.encrypted": "0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.objectstore": "bluestore",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.osd_id": "1",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.type": "block",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.vdo": "0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.with_tpm": "0"
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             },
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "type": "block",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "vg_name": "ceph_vg1"
Dec 09 16:04:56 compute-0 eager_nobel[92880]:         }
Dec 09 16:04:56 compute-0 eager_nobel[92880]:     ],
Dec 09 16:04:56 compute-0 eager_nobel[92880]:     "2": [
Dec 09 16:04:56 compute-0 eager_nobel[92880]:         {
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "devices": [
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "/dev/loop5"
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             ],
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_name": "ceph_lv2",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_size": "21470642176",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "name": "ceph_lv2",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "tags": {
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.cluster_name": "ceph",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.crush_device_class": "",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.encrypted": "0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.objectstore": "bluestore",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.osd_id": "2",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.type": "block",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.vdo": "0",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:                 "ceph.with_tpm": "0"
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             },
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "type": "block",
Dec 09 16:04:56 compute-0 eager_nobel[92880]:             "vg_name": "ceph_vg2"
Dec 09 16:04:56 compute-0 eager_nobel[92880]:         }
Dec 09 16:04:56 compute-0 eager_nobel[92880]:     ]
Dec 09 16:04:56 compute-0 eager_nobel[92880]: }
Dec 09 16:04:56 compute-0 sudo[92770]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:56 compute-0 systemd[1]: libpod-conmon-acaa32c2072adfdf7c544dad4a194109ca561b5eab6891173235f9596edb8b25.scope: Deactivated successfully.
Dec 09 16:04:56 compute-0 systemd[1]: libpod-97a0e97e87a969e66a7c41dd081c83105855bce709a9b4236c3f6272a8dec86a.scope: Deactivated successfully.
Dec 09 16:04:56 compute-0 podman[92864]: 2025-12-09 16:04:56.378041603 +0000 UTC m=+0.468098524 container died 97a0e97e87a969e66a7c41dd081c83105855bce709a9b4236c3f6272a8dec86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_nobel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 09 16:04:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:04:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:04:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:04:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:04:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:04:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:04:56 compute-0 sudo[92935]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fperpuidnebrijolxlgqlidtktwikupm ; /usr/bin/python3'
Dec 09 16:04:56 compute-0 sudo[92935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:56 compute-0 python3[92938]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fc070944c12ee20f1e5f975383478adea25766e8a1bfc2678e88d08a93b5efe-merged.mount: Deactivated successfully.
Dec 09 16:04:57 compute-0 podman[92864]: 2025-12-09 16:04:57.123553842 +0000 UTC m=+1.213610803 container remove 97a0e97e87a969e66a7c41dd081c83105855bce709a9b4236c3f6272a8dec86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_nobel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 09 16:04:57 compute-0 systemd[1]: libpod-conmon-97a0e97e87a969e66a7c41dd081c83105855bce709a9b4236c3f6272a8dec86a.scope: Deactivated successfully.
Dec 09 16:04:57 compute-0 sudo[92722]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:57 compute-0 podman[92939]: 2025-12-09 16:04:57.183666652 +0000 UTC m=+0.467483447 container create 731fa7c2cf268e0abfeb21aef1a372ed5de67c3689c9e674df3855004c7d96f9 (image=quay.io/ceph/ceph:v20, name=brave_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 16:04:57 compute-0 ceph-mon[75222]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:57 compute-0 ceph-mon[75222]: from='client.14238 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:04:57 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 09 16:04:57 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 09 16:04:57 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 09 16:04:57 compute-0 ceph-mon[75222]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 09 16:04:57 compute-0 ceph-mon[75222]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 09 16:04:57 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 09 16:04:57 compute-0 ceph-mon[75222]: osdmap e37: 3 total, 3 up, 3 in
Dec 09 16:04:57 compute-0 ceph-mon[75222]: fsmap cephfs:0
Dec 09 16:04:57 compute-0 ceph-mon[75222]: Saving service mds.cephfs spec with placement compute-0
Dec 09 16:04:57 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:57 compute-0 systemd[1]: Started libpod-conmon-731fa7c2cf268e0abfeb21aef1a372ed5de67c3689c9e674df3855004c7d96f9.scope.
Dec 09 16:04:57 compute-0 sudo[92953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:57 compute-0 sudo[92953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:57 compute-0 sudo[92953]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:57 compute-0 podman[92939]: 2025-12-09 16:04:57.166867075 +0000 UTC m=+0.450683870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee8b29827c2fe777849cde76cbfeb25653e90ce0aca280e565b1682b04d00ac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee8b29827c2fe777849cde76cbfeb25653e90ce0aca280e565b1682b04d00ac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee8b29827c2fe777849cde76cbfeb25653e90ce0aca280e565b1682b04d00ac/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:57 compute-0 sudo[92983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:04:57 compute-0 sudo[92983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:57 compute-0 podman[92939]: 2025-12-09 16:04:57.400256169 +0000 UTC m=+0.684072984 container init 731fa7c2cf268e0abfeb21aef1a372ed5de67c3689c9e674df3855004c7d96f9 (image=quay.io/ceph/ceph:v20, name=brave_raman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:57 compute-0 podman[92939]: 2025-12-09 16:04:57.409007212 +0000 UTC m=+0.692824007 container start 731fa7c2cf268e0abfeb21aef1a372ed5de67c3689c9e674df3855004c7d96f9 (image=quay.io/ceph/ceph:v20, name=brave_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:57 compute-0 podman[92939]: 2025-12-09 16:04:57.470940381 +0000 UTC m=+0.754757186 container attach 731fa7c2cf268e0abfeb21aef1a372ed5de67c3689c9e674df3855004c7d96f9 (image=quay.io/ceph/ceph:v20, name=brave_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:04:57 compute-0 podman[93030]: 2025-12-09 16:04:57.570836331 +0000 UTC m=+0.039587575 container create cfdd64e7736ac5857dc896609e54a7033b9cb2f780dd30c821557d4772d8ee45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:57 compute-0 systemd[1]: Started libpod-conmon-cfdd64e7736ac5857dc896609e54a7033b9cb2f780dd30c821557d4772d8ee45.scope.
Dec 09 16:04:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:57 compute-0 podman[93030]: 2025-12-09 16:04:57.641029339 +0000 UTC m=+0.109780603 container init cfdd64e7736ac5857dc896609e54a7033b9cb2f780dd30c821557d4772d8ee45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:57 compute-0 podman[93030]: 2025-12-09 16:04:57.645805277 +0000 UTC m=+0.114556501 container start cfdd64e7736ac5857dc896609e54a7033b9cb2f780dd30c821557d4772d8ee45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:04:57 compute-0 podman[93030]: 2025-12-09 16:04:57.648868588 +0000 UTC m=+0.117619842 container attach cfdd64e7736ac5857dc896609e54a7033b9cb2f780dd30c821557d4772d8ee45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:57 compute-0 amazing_keller[93056]: 167 167
Dec 09 16:04:57 compute-0 systemd[1]: libpod-cfdd64e7736ac5857dc896609e54a7033b9cb2f780dd30c821557d4772d8ee45.scope: Deactivated successfully.
Dec 09 16:04:57 compute-0 podman[93030]: 2025-12-09 16:04:57.650851161 +0000 UTC m=+0.119602395 container died cfdd64e7736ac5857dc896609e54a7033b9cb2f780dd30c821557d4772d8ee45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:04:57 compute-0 podman[93030]: 2025-12-09 16:04:57.555603535 +0000 UTC m=+0.024354789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-371d15a5e242e2c605e8eccb5b479a7d9cac1ba96b8f2d045b07d06b6a003c16-merged.mount: Deactivated successfully.
Dec 09 16:04:57 compute-0 podman[93030]: 2025-12-09 16:04:57.687012124 +0000 UTC m=+0.155763358 container remove cfdd64e7736ac5857dc896609e54a7033b9cb2f780dd30c821557d4772d8ee45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:57 compute-0 systemd[1]: libpod-conmon-cfdd64e7736ac5857dc896609e54a7033b9cb2f780dd30c821557d4772d8ee45.scope: Deactivated successfully.
Dec 09 16:04:57 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:04:57 compute-0 ceph-mgr[75515]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 09 16:04:57 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 09 16:04:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 09 16:04:57 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:57 compute-0 brave_raman[92979]: Scheduled mds.cephfs update...
Dec 09 16:04:57 compute-0 systemd[1]: libpod-731fa7c2cf268e0abfeb21aef1a372ed5de67c3689c9e674df3855004c7d96f9.scope: Deactivated successfully.
Dec 09 16:04:57 compute-0 podman[92939]: 2025-12-09 16:04:57.862308961 +0000 UTC m=+1.146125756 container died 731fa7c2cf268e0abfeb21aef1a372ed5de67c3689c9e674df3855004c7d96f9 (image=quay.io/ceph/ceph:v20, name=brave_raman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:04:57 compute-0 podman[93079]: 2025-12-09 16:04:57.875122902 +0000 UTC m=+0.055149429 container create 9071b17a9af55abd557466d1fd284276b87416b06707cddff66e3f0d26817e57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_ganguly, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:04:57 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ee8b29827c2fe777849cde76cbfeb25653e90ce0aca280e565b1682b04d00ac-merged.mount: Deactivated successfully.
Dec 09 16:04:57 compute-0 podman[92939]: 2025-12-09 16:04:57.90885166 +0000 UTC m=+1.192668455 container remove 731fa7c2cf268e0abfeb21aef1a372ed5de67c3689c9e674df3855004c7d96f9 (image=quay.io/ceph/ceph:v20, name=brave_raman, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 09 16:04:57 compute-0 systemd[1]: Started libpod-conmon-9071b17a9af55abd557466d1fd284276b87416b06707cddff66e3f0d26817e57.scope.
Dec 09 16:04:57 compute-0 systemd[1]: libpod-conmon-731fa7c2cf268e0abfeb21aef1a372ed5de67c3689c9e674df3855004c7d96f9.scope: Deactivated successfully.
Dec 09 16:04:57 compute-0 sudo[92935]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:57 compute-0 podman[93079]: 2025-12-09 16:04:57.84988872 +0000 UTC m=+0.029915297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:04:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61afcb2fbe419d29ef5e95bbd917b6ff36a0307cc3869a71cff591a944f894a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61afcb2fbe419d29ef5e95bbd917b6ff36a0307cc3869a71cff591a944f894a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61afcb2fbe419d29ef5e95bbd917b6ff36a0307cc3869a71cff591a944f894a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61afcb2fbe419d29ef5e95bbd917b6ff36a0307cc3869a71cff591a944f894a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:57 compute-0 podman[93079]: 2025-12-09 16:04:57.96254384 +0000 UTC m=+0.142570397 container init 9071b17a9af55abd557466d1fd284276b87416b06707cddff66e3f0d26817e57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 16:04:57 compute-0 podman[93079]: 2025-12-09 16:04:57.970307216 +0000 UTC m=+0.150333743 container start 9071b17a9af55abd557466d1fd284276b87416b06707cddff66e3f0d26817e57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_ganguly, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:04:57 compute-0 podman[93079]: 2025-12-09 16:04:57.97345167 +0000 UTC m=+0.153478227 container attach 9071b17a9af55abd557466d1fd284276b87416b06707cddff66e3f0d26817e57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_ganguly, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:04:58 compute-0 sudo[93226]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icufwzfnkfjfqlohdcvajzoligxotubz ; /usr/bin/python3'
Dec 09 16:04:58 compute-0 sudo[93226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:58 compute-0 python3[93233]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 16:04:58 compute-0 sudo[93226]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:58 compute-0 lvm[93283]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:04:58 compute-0 lvm[93285]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:04:58 compute-0 lvm[93285]: VG ceph_vg1 finished
Dec 09 16:04:58 compute-0 lvm[93283]: VG ceph_vg0 finished
Dec 09 16:04:58 compute-0 lvm[93291]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:04:58 compute-0 lvm[93291]: VG ceph_vg2 finished
Dec 09 16:04:58 compute-0 sudo[93341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkiwmmprpjzvzjlvghpewoawpilkcpnj ; /usr/bin/python3'
Dec 09 16:04:58 compute-0 sudo[93341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:58 compute-0 sweet_ganguly[93109]: {}
Dec 09 16:04:58 compute-0 systemd[1]: libpod-9071b17a9af55abd557466d1fd284276b87416b06707cddff66e3f0d26817e57.scope: Deactivated successfully.
Dec 09 16:04:58 compute-0 systemd[1]: libpod-9071b17a9af55abd557466d1fd284276b87416b06707cddff66e3f0d26817e57.scope: Consumed 1.428s CPU time.
Dec 09 16:04:58 compute-0 podman[93079]: 2025-12-09 16:04:58.822519335 +0000 UTC m=+1.002545872 container died 9071b17a9af55abd557466d1fd284276b87416b06707cddff66e3f0d26817e57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 16:04:58 compute-0 ceph-mon[75222]: from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:04:58 compute-0 ceph-mon[75222]: Saving service mds.cephfs spec with placement compute-0
Dec 09 16:04:58 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:58 compute-0 ceph-mon[75222]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-61afcb2fbe419d29ef5e95bbd917b6ff36a0307cc3869a71cff591a944f894a7-merged.mount: Deactivated successfully.
Dec 09 16:04:58 compute-0 podman[93079]: 2025-12-09 16:04:58.868099549 +0000 UTC m=+1.048126076 container remove 9071b17a9af55abd557466d1fd284276b87416b06707cddff66e3f0d26817e57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:58 compute-0 systemd[1]: libpod-conmon-9071b17a9af55abd557466d1fd284276b87416b06707cddff66e3f0d26817e57.scope: Deactivated successfully.
Dec 09 16:04:58 compute-0 sudo[92983]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:04:58 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:04:58 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:58 compute-0 python3[93343]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765296298.2989588-36842-10054056815613/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=32b98c53b5e04ff0e2611789fc8ffea1fdf91cb2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:04:58 compute-0 sudo[93341]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:58 compute-0 sudo[93357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:04:58 compute-0 sudo[93357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:58 compute-0 sudo[93357]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:59 compute-0 sudo[93382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:04:59 compute-0 sudo[93382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:59 compute-0 sudo[93382]: pam_unix(sudo:session): session closed for user root
Dec 09 16:04:59 compute-0 sudo[93431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 09 16:04:59 compute-0 sudo[93431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:04:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:04:59 compute-0 sudo[93486]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymfsxniyymqyhgvryjzdxxknldggvytm ; /usr/bin/python3'
Dec 09 16:04:59 compute-0 sudo[93486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:04:59 compute-0 python3[93492]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:04:59 compute-0 podman[93517]: 2025-12-09 16:04:59.536582497 +0000 UTC m=+0.052005506 container create 501cf5becd53c9237f244ea035b4a62de7a8f520b25f1ea4f62d18837ef33790 (image=quay.io/ceph/ceph:v20, name=ecstatic_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:04:59 compute-0 systemd[1]: Started libpod-conmon-501cf5becd53c9237f244ea035b4a62de7a8f520b25f1ea4f62d18837ef33790.scope.
Dec 09 16:04:59 compute-0 podman[93536]: 2025-12-09 16:04:59.59943902 +0000 UTC m=+0.077948206 container exec 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:59 compute-0 podman[93517]: 2025-12-09 16:04:59.50892671 +0000 UTC m=+0.024349799 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:04:59 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e373e239a249522d4b0c47bd67ddab337198df7156edc99bf6adca07dbc0a92/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e373e239a249522d4b0c47bd67ddab337198df7156edc99bf6adca07dbc0a92/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:04:59 compute-0 podman[93517]: 2025-12-09 16:04:59.643464112 +0000 UTC m=+0.158887141 container init 501cf5becd53c9237f244ea035b4a62de7a8f520b25f1ea4f62d18837ef33790 (image=quay.io/ceph/ceph:v20, name=ecstatic_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 09 16:04:59 compute-0 podman[93517]: 2025-12-09 16:04:59.655363909 +0000 UTC m=+0.170786928 container start 501cf5becd53c9237f244ea035b4a62de7a8f520b25f1ea4f62d18837ef33790 (image=quay.io/ceph/ceph:v20, name=ecstatic_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 09 16:04:59 compute-0 podman[93517]: 2025-12-09 16:04:59.659982522 +0000 UTC m=+0.175405531 container attach 501cf5becd53c9237f244ea035b4a62de7a8f520b25f1ea4f62d18837ef33790 (image=quay.io/ceph/ceph:v20, name=ecstatic_einstein, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 09 16:04:59 compute-0 podman[93536]: 2025-12-09 16:04:59.691416889 +0000 UTC m=+0.169926095 container exec_died 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:04:59 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:04:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:04:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:00 compute-0 sshd-session[93578]: Invalid user admin from 146.190.31.45 port 52768
Dec 09 16:05:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Dec 09 16:05:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2504529590' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 09 16:05:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2504529590' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 09 16:05:00 compute-0 systemd[1]: libpod-501cf5becd53c9237f244ea035b4a62de7a8f520b25f1ea4f62d18837ef33790.scope: Deactivated successfully.
Dec 09 16:05:00 compute-0 podman[93517]: 2025-12-09 16:05:00.183008528 +0000 UTC m=+0.698431547 container died 501cf5becd53c9237f244ea035b4a62de7a8f520b25f1ea4f62d18837ef33790 (image=quay.io/ceph/ceph:v20, name=ecstatic_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:05:00 compute-0 sshd-session[93578]: Connection closed by invalid user admin 146.190.31.45 port 52768 [preauth]
Dec 09 16:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e373e239a249522d4b0c47bd67ddab337198df7156edc99bf6adca07dbc0a92-merged.mount: Deactivated successfully.
Dec 09 16:05:00 compute-0 podman[93517]: 2025-12-09 16:05:00.231256742 +0000 UTC m=+0.746679741 container remove 501cf5becd53c9237f244ea035b4a62de7a8f520b25f1ea4f62d18837ef33790 (image=quay.io/ceph/ceph:v20, name=ecstatic_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 16:05:00 compute-0 systemd[1]: libpod-conmon-501cf5becd53c9237f244ea035b4a62de7a8f520b25f1ea4f62d18837ef33790.scope: Deactivated successfully.
Dec 09 16:05:00 compute-0 sudo[93486]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:00 compute-0 sudo[93431]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:05:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:05:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:05:00 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:05:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:05:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:05:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:05:00 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:05:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:05:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:05:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:05:00 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:00 compute-0 sudo[93719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:00 compute-0 sudo[93719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:00 compute-0 sudo[93719]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:00 compute-0 sudo[93744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:05:00 compute-0 sudo[93744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:00 compute-0 podman[93781]: 2025-12-09 16:05:00.769974685 +0000 UTC m=+0.036968115 container create c7230408294e010adfb2812da8f05548d062261c4f8ab20b867d2e8377fbaddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 09 16:05:00 compute-0 systemd[1]: Started libpod-conmon-c7230408294e010adfb2812da8f05548d062261c4f8ab20b867d2e8377fbaddb.scope.
Dec 09 16:05:00 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:00 compute-0 podman[93781]: 2025-12-09 16:05:00.843867273 +0000 UTC m=+0.110860713 container init c7230408294e010adfb2812da8f05548d062261c4f8ab20b867d2e8377fbaddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:05:00 compute-0 podman[93781]: 2025-12-09 16:05:00.849845142 +0000 UTC m=+0.116838572 container start c7230408294e010adfb2812da8f05548d062261c4f8ab20b867d2e8377fbaddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:00 compute-0 podman[93781]: 2025-12-09 16:05:00.753803355 +0000 UTC m=+0.020796815 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:00 compute-0 podman[93781]: 2025-12-09 16:05:00.853412067 +0000 UTC m=+0.120405517 container attach c7230408294e010adfb2812da8f05548d062261c4f8ab20b867d2e8377fbaddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:00 compute-0 laughing_tharp[93798]: 167 167
Dec 09 16:05:00 compute-0 systemd[1]: libpod-c7230408294e010adfb2812da8f05548d062261c4f8ab20b867d2e8377fbaddb.scope: Deactivated successfully.
Dec 09 16:05:00 compute-0 conmon[93798]: conmon c7230408294e010adfb2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7230408294e010adfb2812da8f05548d062261c4f8ab20b867d2e8377fbaddb.scope/container/memory.events
Dec 09 16:05:00 compute-0 podman[93781]: 2025-12-09 16:05:00.855347008 +0000 UTC m=+0.122340448 container died c7230408294e010adfb2812da8f05548d062261c4f8ab20b867d2e8377fbaddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 09 16:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-df8e1b15858f80e73dfd24328dcd390d8f60a0d0b27d21bab39750a43962f6e4-merged.mount: Deactivated successfully.
Dec 09 16:05:00 compute-0 sudo[93833]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyqhckjnzcndekgwrlseidffpjqdouha ; /usr/bin/python3'
Dec 09 16:05:00 compute-0 sudo[93833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:00 compute-0 podman[93781]: 2025-12-09 16:05:00.895527548 +0000 UTC m=+0.162520988 container remove c7230408294e010adfb2812da8f05548d062261c4f8ab20b867d2e8377fbaddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 16:05:00 compute-0 systemd[1]: libpod-conmon-c7230408294e010adfb2812da8f05548d062261c4f8ab20b867d2e8377fbaddb.scope: Deactivated successfully.
Dec 09 16:05:00 compute-0 ceph-mon[75222]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:00 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2504529590' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 09 16:05:00 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2504529590' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 09 16:05:00 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:00 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:00 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:00 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:05:00 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:00 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:05:00 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:05:00 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:01 compute-0 python3[93840]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:05:01 compute-0 podman[93848]: 2025-12-09 16:05:01.063086649 +0000 UTC m=+0.043469808 container create 218a5ce8eb3383c83455bfc2701b6e557a3039b0a67ba1fed777d91fafb6af97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 09 16:05:01 compute-0 systemd[1]: Started libpod-conmon-218a5ce8eb3383c83455bfc2701b6e557a3039b0a67ba1fed777d91fafb6af97.scope.
Dec 09 16:05:01 compute-0 podman[93851]: 2025-12-09 16:05:01.109816143 +0000 UTC m=+0.076436236 container create ef945b3812b59c0fec7c81dbee225e40f526ee6089801cd922c579f9f3481142 (image=quay.io/ceph/ceph:v20, name=vibrant_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:05:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:01 compute-0 podman[93848]: 2025-12-09 16:05:01.044963847 +0000 UTC m=+0.025347066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae4fabcb9f0f79554265e95458b098b26581d56ffae38a521e7226321413611/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae4fabcb9f0f79554265e95458b098b26581d56ffae38a521e7226321413611/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae4fabcb9f0f79554265e95458b098b26581d56ffae38a521e7226321413611/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae4fabcb9f0f79554265e95458b098b26581d56ffae38a521e7226321413611/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae4fabcb9f0f79554265e95458b098b26581d56ffae38a521e7226321413611/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:01 compute-0 systemd[1]: Started libpod-conmon-ef945b3812b59c0fec7c81dbee225e40f526ee6089801cd922c579f9f3481142.scope.
Dec 09 16:05:01 compute-0 podman[93848]: 2025-12-09 16:05:01.165903667 +0000 UTC m=+0.146286846 container init 218a5ce8eb3383c83455bfc2701b6e557a3039b0a67ba1fed777d91fafb6af97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_feistel, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:01 compute-0 podman[93851]: 2025-12-09 16:05:01.078665474 +0000 UTC m=+0.045285617 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:05:01 compute-0 podman[93848]: 2025-12-09 16:05:01.171782113 +0000 UTC m=+0.152165302 container start 218a5ce8eb3383c83455bfc2701b6e557a3039b0a67ba1fed777d91fafb6af97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_feistel, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 09 16:05:01 compute-0 podman[93848]: 2025-12-09 16:05:01.175018519 +0000 UTC m=+0.155401688 container attach 218a5ce8eb3383c83455bfc2701b6e557a3039b0a67ba1fed777d91fafb6af97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:05:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94bc6aff4ffa819f2f0037f542e74e9cce0534b900888f3247d7f0d1d8ddfb65/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94bc6aff4ffa819f2f0037f542e74e9cce0534b900888f3247d7f0d1d8ddfb65/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:01 compute-0 podman[93851]: 2025-12-09 16:05:01.193190023 +0000 UTC m=+0.159810106 container init ef945b3812b59c0fec7c81dbee225e40f526ee6089801cd922c579f9f3481142 (image=quay.io/ceph/ceph:v20, name=vibrant_banzai, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 09 16:05:01 compute-0 podman[93851]: 2025-12-09 16:05:01.198214527 +0000 UTC m=+0.164834580 container start ef945b3812b59c0fec7c81dbee225e40f526ee6089801cd922c579f9f3481142 (image=quay.io/ceph/ceph:v20, name=vibrant_banzai, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:05:01 compute-0 podman[93851]: 2025-12-09 16:05:01.201400082 +0000 UTC m=+0.168020245 container attach ef945b3812b59c0fec7c81dbee225e40f526ee6089801cd922c579f9f3481142 (image=quay.io/ceph/ceph:v20, name=vibrant_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:01 compute-0 amazing_feistel[93879]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:05:01 compute-0 amazing_feistel[93879]: --> All data devices are unavailable
Dec 09 16:05:01 compute-0 systemd[1]: libpod-218a5ce8eb3383c83455bfc2701b6e557a3039b0a67ba1fed777d91fafb6af97.scope: Deactivated successfully.
Dec 09 16:05:01 compute-0 podman[93848]: 2025-12-09 16:05:01.705443822 +0000 UTC m=+0.685827061 container died 218a5ce8eb3383c83455bfc2701b6e557a3039b0a67ba1fed777d91fafb6af97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 09 16:05:01 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3286811873' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 09 16:05:01 compute-0 vibrant_banzai[93884]: 
Dec 09 16:05:01 compute-0 vibrant_banzai[93884]: {"fsid":"67f67f44-54fc-54ea-8df0-10931b6ecdaf","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":117,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":37,"num_osds":3,"num_up_osds":3,"osd_up_since":1765296267,"num_in_osds":3,"osd_in_since":1765296245,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84017152,"bytes_avail":64327909376,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-12-09T16:04:56:239709+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-09T16:04:27.868520+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec 09 16:05:01 compute-0 systemd[1]: libpod-ef945b3812b59c0fec7c81dbee225e40f526ee6089801cd922c579f9f3481142.scope: Deactivated successfully.
Dec 09 16:05:01 compute-0 podman[93851]: 2025-12-09 16:05:01.725522746 +0000 UTC m=+0.692142809 container died ef945b3812b59c0fec7c81dbee225e40f526ee6089801cd922c579f9f3481142 (image=quay.io/ceph/ceph:v20, name=vibrant_banzai, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 09 16:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ae4fabcb9f0f79554265e95458b098b26581d56ffae38a521e7226321413611-merged.mount: Deactivated successfully.
Dec 09 16:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-94bc6aff4ffa819f2f0037f542e74e9cce0534b900888f3247d7f0d1d8ddfb65-merged.mount: Deactivated successfully.
Dec 09 16:05:01 compute-0 podman[93848]: 2025-12-09 16:05:01.765285155 +0000 UTC m=+0.745668334 container remove 218a5ce8eb3383c83455bfc2701b6e557a3039b0a67ba1fed777d91fafb6af97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:01 compute-0 podman[93851]: 2025-12-09 16:05:01.782480643 +0000 UTC m=+0.749100696 container remove ef945b3812b59c0fec7c81dbee225e40f526ee6089801cd922c579f9f3481142 (image=quay.io/ceph/ceph:v20, name=vibrant_banzai, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 09 16:05:01 compute-0 systemd[1]: libpod-conmon-ef945b3812b59c0fec7c81dbee225e40f526ee6089801cd922c579f9f3481142.scope: Deactivated successfully.
Dec 09 16:05:01 compute-0 systemd[1]: libpod-conmon-218a5ce8eb3383c83455bfc2701b6e557a3039b0a67ba1fed777d91fafb6af97.scope: Deactivated successfully.
Dec 09 16:05:01 compute-0 sudo[93744]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:01 compute-0 sudo[93833]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:01 compute-0 sudo[93951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:01 compute-0 sudo[93951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:01 compute-0 sudo[93951]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:01 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:01 compute-0 sudo[93976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:05:01 compute-0 sudo[93976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:01 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3286811873' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 09 16:05:01 compute-0 sudo[94024]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scldtgnswdclpspppdounxtlsxtgwzgp ; /usr/bin/python3'
Dec 09 16:05:01 compute-0 sudo[94024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:02 compute-0 python3[94026]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:05:02 compute-0 podman[94027]: 2025-12-09 16:05:02.194543833 +0000 UTC m=+0.066060730 container create 3fc0cbc5240540b87cef1caf14fc878c6076a13d44ef92d0208ef55052373c26 (image=quay.io/ceph/ceph:v20, name=kind_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 16:05:02 compute-0 systemd[1]: Started libpod-conmon-3fc0cbc5240540b87cef1caf14fc878c6076a13d44ef92d0208ef55052373c26.scope.
Dec 09 16:05:02 compute-0 podman[94027]: 2025-12-09 16:05:02.170376899 +0000 UTC m=+0.041893886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:05:02 compute-0 podman[94047]: 2025-12-09 16:05:02.262846791 +0000 UTC m=+0.051175723 container create 4dd32b0f860891dc86689ec374914699c1cd2975a0e80d65ee66324b4a43a2f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_goodall, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 09 16:05:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40f415220bf8e091ee7b730e5803837fa5beda6d28f9674ffc86dad296d878ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40f415220bf8e091ee7b730e5803837fa5beda6d28f9674ffc86dad296d878ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:02 compute-0 systemd[1]: Started libpod-conmon-4dd32b0f860891dc86689ec374914699c1cd2975a0e80d65ee66324b4a43a2f7.scope.
Dec 09 16:05:02 compute-0 podman[94027]: 2025-12-09 16:05:02.291603737 +0000 UTC m=+0.163120704 container init 3fc0cbc5240540b87cef1caf14fc878c6076a13d44ef92d0208ef55052373c26 (image=quay.io/ceph/ceph:v20, name=kind_mirzakhani, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:05:02 compute-0 podman[94027]: 2025-12-09 16:05:02.300024541 +0000 UTC m=+0.171541458 container start 3fc0cbc5240540b87cef1caf14fc878c6076a13d44ef92d0208ef55052373c26 (image=quay.io/ceph/ceph:v20, name=kind_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Dec 09 16:05:02 compute-0 podman[94027]: 2025-12-09 16:05:02.304502 +0000 UTC m=+0.176018917 container attach 3fc0cbc5240540b87cef1caf14fc878c6076a13d44ef92d0208ef55052373c26 (image=quay.io/ceph/ceph:v20, name=kind_mirzakhani, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:02 compute-0 podman[94047]: 2025-12-09 16:05:02.325176231 +0000 UTC m=+0.113505213 container init 4dd32b0f860891dc86689ec374914699c1cd2975a0e80d65ee66324b4a43a2f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 16:05:02 compute-0 podman[94047]: 2025-12-09 16:05:02.237584949 +0000 UTC m=+0.025913931 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:02 compute-0 podman[94047]: 2025-12-09 16:05:02.334488429 +0000 UTC m=+0.122817371 container start 4dd32b0f860891dc86689ec374914699c1cd2975a0e80d65ee66324b4a43a2f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:05:02 compute-0 podman[94047]: 2025-12-09 16:05:02.338510676 +0000 UTC m=+0.126839618 container attach 4dd32b0f860891dc86689ec374914699c1cd2975a0e80d65ee66324b4a43a2f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_goodall, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:02 compute-0 boring_goodall[94070]: 167 167
Dec 09 16:05:02 compute-0 systemd[1]: libpod-4dd32b0f860891dc86689ec374914699c1cd2975a0e80d65ee66324b4a43a2f7.scope: Deactivated successfully.
Dec 09 16:05:02 compute-0 podman[94047]: 2025-12-09 16:05:02.341076944 +0000 UTC m=+0.129405886 container died 4dd32b0f860891dc86689ec374914699c1cd2975a0e80d65ee66324b4a43a2f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-cee077137dcfc651bc6dd9dd0d4ee08c8e723c39f967885b4c225bf285e8bd71-merged.mount: Deactivated successfully.
Dec 09 16:05:02 compute-0 podman[94047]: 2025-12-09 16:05:02.38111575 +0000 UTC m=+0.169444702 container remove 4dd32b0f860891dc86689ec374914699c1cd2975a0e80d65ee66324b4a43a2f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:02 compute-0 systemd[1]: libpod-conmon-4dd32b0f860891dc86689ec374914699c1cd2975a0e80d65ee66324b4a43a2f7.scope: Deactivated successfully.
Dec 09 16:05:02 compute-0 podman[94114]: 2025-12-09 16:05:02.592825107 +0000 UTC m=+0.051894813 container create 215c559867c62438875e1f453f46e933745803fc77bc1fbe460b8fcd19828540 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_easley, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 09 16:05:02 compute-0 systemd[1]: Started libpod-conmon-215c559867c62438875e1f453f46e933745803fc77bc1fbe460b8fcd19828540.scope.
Dec 09 16:05:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abfa5a10fcd9e09475abc39a07f2876e80b4248b6ef5bf5649889731a13c9a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abfa5a10fcd9e09475abc39a07f2876e80b4248b6ef5bf5649889731a13c9a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abfa5a10fcd9e09475abc39a07f2876e80b4248b6ef5bf5649889731a13c9a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abfa5a10fcd9e09475abc39a07f2876e80b4248b6ef5bf5649889731a13c9a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:02 compute-0 podman[94114]: 2025-12-09 16:05:02.577309144 +0000 UTC m=+0.036378820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:02 compute-0 podman[94114]: 2025-12-09 16:05:02.684516948 +0000 UTC m=+0.143586684 container init 215c559867c62438875e1f453f46e933745803fc77bc1fbe460b8fcd19828540 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_easley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 09 16:05:02 compute-0 podman[94114]: 2025-12-09 16:05:02.691991087 +0000 UTC m=+0.151060773 container start 215c559867c62438875e1f453f46e933745803fc77bc1fbe460b8fcd19828540 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_easley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:02 compute-0 podman[94114]: 2025-12-09 16:05:02.695584113 +0000 UTC m=+0.154653809 container attach 215c559867c62438875e1f453f46e933745803fc77bc1fbe460b8fcd19828540 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:02 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 09 16:05:02 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1579071078' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 09 16:05:02 compute-0 kind_mirzakhani[94061]: 
Dec 09 16:05:02 compute-0 kind_mirzakhani[94061]: {"epoch":1,"fsid":"67f67f44-54fc-54ea-8df0-10931b6ecdaf","modified":"2025-12-09T16:02:59.529794Z","created":"2025-12-09T16:02:59.529794Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec 09 16:05:02 compute-0 kind_mirzakhani[94061]: dumped monmap epoch 1
Dec 09 16:05:02 compute-0 systemd[1]: libpod-3fc0cbc5240540b87cef1caf14fc878c6076a13d44ef92d0208ef55052373c26.scope: Deactivated successfully.
Dec 09 16:05:02 compute-0 podman[94027]: 2025-12-09 16:05:02.830611398 +0000 UTC m=+0.702128315 container died 3fc0cbc5240540b87cef1caf14fc878c6076a13d44ef92d0208ef55052373c26 (image=quay.io/ceph/ceph:v20, name=kind_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-40f415220bf8e091ee7b730e5803837fa5beda6d28f9674ffc86dad296d878ec-merged.mount: Deactivated successfully.
Dec 09 16:05:02 compute-0 podman[94027]: 2025-12-09 16:05:02.869658157 +0000 UTC m=+0.741175074 container remove 3fc0cbc5240540b87cef1caf14fc878c6076a13d44ef92d0208ef55052373c26 (image=quay.io/ceph/ceph:v20, name=kind_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:05:02 compute-0 systemd[1]: libpod-conmon-3fc0cbc5240540b87cef1caf14fc878c6076a13d44ef92d0208ef55052373c26.scope: Deactivated successfully.
Dec 09 16:05:02 compute-0 sudo[94024]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:02 compute-0 ceph-mon[75222]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:02 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1579071078' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 09 16:05:02 compute-0 elegant_easley[94131]: {
Dec 09 16:05:02 compute-0 elegant_easley[94131]:     "0": [
Dec 09 16:05:02 compute-0 elegant_easley[94131]:         {
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "devices": [
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "/dev/loop3"
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             ],
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "lv_name": "ceph_lv0",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "lv_size": "21470642176",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "name": "ceph_lv0",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "tags": {
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.cluster_name": "ceph",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.crush_device_class": "",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.encrypted": "0",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.objectstore": "bluestore",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.osd_id": "0",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.type": "block",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.vdo": "0",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.with_tpm": "0"
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             },
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "type": "block",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "vg_name": "ceph_vg0"
Dec 09 16:05:02 compute-0 elegant_easley[94131]:         }
Dec 09 16:05:02 compute-0 elegant_easley[94131]:     ],
Dec 09 16:05:02 compute-0 elegant_easley[94131]:     "1": [
Dec 09 16:05:02 compute-0 elegant_easley[94131]:         {
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "devices": [
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "/dev/loop4"
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             ],
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "lv_name": "ceph_lv1",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "lv_size": "21470642176",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "name": "ceph_lv1",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             "tags": {
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.cluster_name": "ceph",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.crush_device_class": "",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.encrypted": "0",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.objectstore": "bluestore",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.osd_id": "1",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.type": "block",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.vdo": "0",
Dec 09 16:05:02 compute-0 elegant_easley[94131]:                 "ceph.with_tpm": "0"
Dec 09 16:05:02 compute-0 elegant_easley[94131]:             },
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "type": "block",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "vg_name": "ceph_vg1"
Dec 09 16:05:03 compute-0 elegant_easley[94131]:         }
Dec 09 16:05:03 compute-0 elegant_easley[94131]:     ],
Dec 09 16:05:03 compute-0 elegant_easley[94131]:     "2": [
Dec 09 16:05:03 compute-0 elegant_easley[94131]:         {
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "devices": [
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "/dev/loop5"
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             ],
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "lv_name": "ceph_lv2",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "lv_size": "21470642176",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "name": "ceph_lv2",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "tags": {
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.cluster_name": "ceph",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.crush_device_class": "",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.encrypted": "0",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.objectstore": "bluestore",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.osd_id": "2",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.type": "block",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.vdo": "0",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:                 "ceph.with_tpm": "0"
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             },
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "type": "block",
Dec 09 16:05:03 compute-0 elegant_easley[94131]:             "vg_name": "ceph_vg2"
Dec 09 16:05:03 compute-0 elegant_easley[94131]:         }
Dec 09 16:05:03 compute-0 elegant_easley[94131]:     ]
Dec 09 16:05:03 compute-0 elegant_easley[94131]: }
Dec 09 16:05:03 compute-0 systemd[1]: libpod-215c559867c62438875e1f453f46e933745803fc77bc1fbe460b8fcd19828540.scope: Deactivated successfully.
Dec 09 16:05:03 compute-0 podman[94114]: 2025-12-09 16:05:03.037226319 +0000 UTC m=+0.496296025 container died 215c559867c62438875e1f453f46e933745803fc77bc1fbe460b8fcd19828540 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3abfa5a10fcd9e09475abc39a07f2876e80b4248b6ef5bf5649889731a13c9a7-merged.mount: Deactivated successfully.
Dec 09 16:05:03 compute-0 podman[94114]: 2025-12-09 16:05:03.09211058 +0000 UTC m=+0.551180256 container remove 215c559867c62438875e1f453f46e933745803fc77bc1fbe460b8fcd19828540 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:03 compute-0 systemd[1]: libpod-conmon-215c559867c62438875e1f453f46e933745803fc77bc1fbe460b8fcd19828540.scope: Deactivated successfully.
Dec 09 16:05:03 compute-0 sudo[93976]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:03 compute-0 sudo[94166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:03 compute-0 sudo[94166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:03 compute-0 sudo[94166]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:03 compute-0 sudo[94191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:05:03 compute-0 sudo[94191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:03 compute-0 sudo[94239]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibtczsolxtgccatyycclcgkdozmascev ; /usr/bin/python3'
Dec 09 16:05:03 compute-0 sudo[94239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:03 compute-0 python3[94241]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:05:03 compute-0 podman[94244]: 2025-12-09 16:05:03.573930928 +0000 UTC m=+0.049850248 container create 21c952461429c648e7453b18a37961cfeb3adbd77691277a8709fe350d1bb8cb (image=quay.io/ceph/ceph:v20, name=unruffled_liskov, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:05:03 compute-0 systemd[1]: Started libpod-conmon-21c952461429c648e7453b18a37961cfeb3adbd77691277a8709fe350d1bb8cb.scope.
Dec 09 16:05:03 compute-0 podman[94268]: 2025-12-09 16:05:03.623591921 +0000 UTC m=+0.046977992 container create c9bf8edf5d565532261e6c0ac327f61efc0715c446c931778cc3e39f6bfac659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_blackwell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:05:03 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:03 compute-0 podman[94244]: 2025-12-09 16:05:03.549988431 +0000 UTC m=+0.025907761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881cd59a641da24630657d2b16a75626db7d182ca7b0a473e1ee3aec0ba2c8bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881cd59a641da24630657d2b16a75626db7d182ca7b0a473e1ee3aec0ba2c8bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:03 compute-0 systemd[1]: Started libpod-conmon-c9bf8edf5d565532261e6c0ac327f61efc0715c446c931778cc3e39f6bfac659.scope.
Dec 09 16:05:03 compute-0 podman[94244]: 2025-12-09 16:05:03.670074128 +0000 UTC m=+0.145993438 container init 21c952461429c648e7453b18a37961cfeb3adbd77691277a8709fe350d1bb8cb (image=quay.io/ceph/ceph:v20, name=unruffled_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:05:03 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:03 compute-0 podman[94244]: 2025-12-09 16:05:03.677436504 +0000 UTC m=+0.153355814 container start 21c952461429c648e7453b18a37961cfeb3adbd77691277a8709fe350d1bb8cb (image=quay.io/ceph/ceph:v20, name=unruffled_liskov, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 16:05:03 compute-0 podman[94244]: 2025-12-09 16:05:03.682677264 +0000 UTC m=+0.158596584 container attach 21c952461429c648e7453b18a37961cfeb3adbd77691277a8709fe350d1bb8cb (image=quay.io/ceph/ceph:v20, name=unruffled_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:05:03 compute-0 podman[94268]: 2025-12-09 16:05:03.68855957 +0000 UTC m=+0.111945631 container init c9bf8edf5d565532261e6c0ac327f61efc0715c446c931778cc3e39f6bfac659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 09 16:05:03 compute-0 podman[94268]: 2025-12-09 16:05:03.693215514 +0000 UTC m=+0.116601545 container start c9bf8edf5d565532261e6c0ac327f61efc0715c446c931778cc3e39f6bfac659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:03 compute-0 podman[94268]: 2025-12-09 16:05:03.696393619 +0000 UTC m=+0.119779690 container attach c9bf8edf5d565532261e6c0ac327f61efc0715c446c931778cc3e39f6bfac659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:05:03 compute-0 xenodochial_blackwell[94289]: 167 167
Dec 09 16:05:03 compute-0 systemd[1]: libpod-c9bf8edf5d565532261e6c0ac327f61efc0715c446c931778cc3e39f6bfac659.scope: Deactivated successfully.
Dec 09 16:05:03 compute-0 podman[94268]: 2025-12-09 16:05:03.697820817 +0000 UTC m=+0.121206848 container died c9bf8edf5d565532261e6c0ac327f61efc0715c446c931778cc3e39f6bfac659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:03 compute-0 podman[94268]: 2025-12-09 16:05:03.603941987 +0000 UTC m=+0.027328038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:03 compute-0 podman[94268]: 2025-12-09 16:05:03.734930236 +0000 UTC m=+0.158316267 container remove c9bf8edf5d565532261e6c0ac327f61efc0715c446c931778cc3e39f6bfac659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_blackwell, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8ff96269f42417bb7f779218ce4306449d5087b735f90798607be94fe48cb3e-merged.mount: Deactivated successfully.
Dec 09 16:05:03 compute-0 systemd[1]: libpod-conmon-c9bf8edf5d565532261e6c0ac327f61efc0715c446c931778cc3e39f6bfac659.scope: Deactivated successfully.
Dec 09 16:05:03 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:03 compute-0 podman[94332]: 2025-12-09 16:05:03.884559969 +0000 UTC m=+0.040219832 container create 5ab90623e6e0f2181ca5f173f83f45df3f52bc98b5341090ad4591aba57a11bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_leakey, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:05:03 compute-0 systemd[1]: Started libpod-conmon-5ab90623e6e0f2181ca5f173f83f45df3f52bc98b5341090ad4591aba57a11bd.scope.
Dec 09 16:05:03 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64921e03eb9e4880606d9782b810c9f2db3fbc2d67fb3ff33ac8c8b24401ad5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64921e03eb9e4880606d9782b810c9f2db3fbc2d67fb3ff33ac8c8b24401ad5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64921e03eb9e4880606d9782b810c9f2db3fbc2d67fb3ff33ac8c8b24401ad5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64921e03eb9e4880606d9782b810c9f2db3fbc2d67fb3ff33ac8c8b24401ad5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:03 compute-0 podman[94332]: 2025-12-09 16:05:03.866746265 +0000 UTC m=+0.022406158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:03 compute-0 podman[94332]: 2025-12-09 16:05:03.970348573 +0000 UTC m=+0.126008486 container init 5ab90623e6e0f2181ca5f173f83f45df3f52bc98b5341090ad4591aba57a11bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_leakey, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 09 16:05:03 compute-0 podman[94332]: 2025-12-09 16:05:03.978366597 +0000 UTC m=+0.134026460 container start 5ab90623e6e0f2181ca5f173f83f45df3f52bc98b5341090ad4591aba57a11bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_leakey, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 09 16:05:03 compute-0 podman[94332]: 2025-12-09 16:05:03.986041551 +0000 UTC m=+0.141701464 container attach 5ab90623e6e0f2181ca5f173f83f45df3f52bc98b5341090ad4591aba57a11bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec 09 16:05:04 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2512318397' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 09 16:05:04 compute-0 unruffled_liskov[94283]: [client.openstack]
Dec 09 16:05:04 compute-0 unruffled_liskov[94283]:         key = AQANSDhpAAAAABAAVF61qpb1VLD1uojiB4Gqaw==
Dec 09 16:05:04 compute-0 unruffled_liskov[94283]:         caps mgr = "allow *"
Dec 09 16:05:04 compute-0 unruffled_liskov[94283]:         caps mon = "profile rbd"
Dec 09 16:05:04 compute-0 unruffled_liskov[94283]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec 09 16:05:04 compute-0 systemd[1]: libpod-21c952461429c648e7453b18a37961cfeb3adbd77691277a8709fe350d1bb8cb.scope: Deactivated successfully.
Dec 09 16:05:04 compute-0 conmon[94283]: conmon 21c952461429c648e745 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21c952461429c648e7453b18a37961cfeb3adbd77691277a8709fe350d1bb8cb.scope/container/memory.events
Dec 09 16:05:04 compute-0 podman[94244]: 2025-12-09 16:05:04.214426422 +0000 UTC m=+0.690345732 container died 21c952461429c648e7453b18a37961cfeb3adbd77691277a8709fe350d1bb8cb (image=quay.io/ceph/ceph:v20, name=unruffled_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-881cd59a641da24630657d2b16a75626db7d182ca7b0a473e1ee3aec0ba2c8bd-merged.mount: Deactivated successfully.
Dec 09 16:05:04 compute-0 podman[94244]: 2025-12-09 16:05:04.250290327 +0000 UTC m=+0.726209647 container remove 21c952461429c648e7453b18a37961cfeb3adbd77691277a8709fe350d1bb8cb (image=quay.io/ceph/ceph:v20, name=unruffled_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:05:04 compute-0 systemd[1]: libpod-conmon-21c952461429c648e7453b18a37961cfeb3adbd77691277a8709fe350d1bb8cb.scope: Deactivated successfully.
Dec 09 16:05:04 compute-0 sudo[94239]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:05:04 compute-0 lvm[94442]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:05:04 compute-0 lvm[94441]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:05:04 compute-0 lvm[94442]: VG ceph_vg1 finished
Dec 09 16:05:04 compute-0 lvm[94441]: VG ceph_vg0 finished
Dec 09 16:05:04 compute-0 lvm[94444]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:05:04 compute-0 lvm[94444]: VG ceph_vg2 finished
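The lvm "PV /dev/loopN online, VG ceph_vgN is complete" messages are pvscan autoactivation events: each OSD volume group here sits on a loop device rather than a physical disk, as is typical for a CI job. A sketch of how such a VG is usually prepared (backing-file path and size are assumptions, not taken from this log; the VG name matches):

    # Sparse backing file -> loop device -> PV -> VG for ceph-volume to consume.
    truncate -s 20G /var/lib/ceph-osd-0.img
    LOOP=$(losetup --find --show /var/lib/ceph-osd-0.img)
    pvcreate "$LOOP"
    vgcreate ceph_vg0 "$LOOP"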
Dec 09 16:05:04 compute-0 festive_leakey[94349]: {}
Dec 09 16:05:04 compute-0 systemd[1]: libpod-5ab90623e6e0f2181ca5f173f83f45df3f52bc98b5341090ad4591aba57a11bd.scope: Deactivated successfully.
Dec 09 16:05:04 compute-0 podman[94332]: 2025-12-09 16:05:04.793442498 +0000 UTC m=+0.949102371 container died 5ab90623e6e0f2181ca5f173f83f45df3f52bc98b5341090ad4591aba57a11bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_leakey, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:04 compute-0 systemd[1]: libpod-5ab90623e6e0f2181ca5f173f83f45df3f52bc98b5341090ad4591aba57a11bd.scope: Consumed 1.253s CPU time.
Dec 09 16:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a64921e03eb9e4880606d9782b810c9f2db3fbc2d67fb3ff33ac8c8b24401ad5-merged.mount: Deactivated successfully.
Dec 09 16:05:04 compute-0 podman[94332]: 2025-12-09 16:05:04.851656788 +0000 UTC m=+1.007316691 container remove 5ab90623e6e0f2181ca5f173f83f45df3f52bc98b5341090ad4591aba57a11bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:05:04 compute-0 systemd[1]: libpod-conmon-5ab90623e6e0f2181ca5f173f83f45df3f52bc98b5341090ad4591aba57a11bd.scope: Deactivated successfully.
Dec 09 16:05:04 compute-0 sudo[94191]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:05:04 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:05:04 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:04 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev 2d88a8bc-6d32-4724-8966-3ba8d5270ffa (Updating rgw.rgw deployment (+1 -> 1))
Dec 09 16:05:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.efuxpz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 09 16:05:04 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.efuxpz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec 09 16:05:04 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.efuxpz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 16:05:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 09 16:05:04 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:05:04 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:04 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.efuxpz on compute-0
Dec 09 16:05:04 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.efuxpz on compute-0
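"Deploying daemon rgw.rgw.compute-0.efuxpz on compute-0" is the cephadm mgr module acting on an rgw service spec: it mints the daemon's key (the `auth get-or-create` above), pins rgw_frontends, and renders a systemd unit on the host. A hedged sketch of the kind of command that kicks this off, with the service id `rgw` from the log and the port taken from the radosgw startup lines further down:

    # One rgw daemon for service "rgw", placed on compute-0.
    ceph orch apply rgw rgw --placement="compute-0" --port=8082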
Dec 09 16:05:04 compute-0 ceph-mon[75222]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:04 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2512318397' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 09 16:05:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.efuxpz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec 09 16:05:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.efuxpz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 16:05:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:05 compute-0 sudo[94461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:05 compute-0 sudo[94461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:05 compute-0 sudo[94461]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:05 compute-0 sudo[94486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:05:05 compute-0 sudo[94486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
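The sudo COMMAND above is how the mgr drives the host: a content-addressed copy of the cephadm script under /var/lib/ceph/<fsid>/ is invoked as root with the internal `_orch deploy` subcommand; the daemon spec itself appears to travel on stdin, which is why no daemon name shows up on the command line. The host-side result can be checked with the public cephadm CLI:

    # Daemons cephadm manages on this host, and the orchestrator's view of them.
    sudo cephadm ls
    sudo cephadm shell -- ceph orch ps compute-0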
Dec 09 16:05:05 compute-0 sudo[94709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrmehioppvswzfefvvfybgawfxgypqlm ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765296305.223745-36916-258900288088673/async_wrapper.py j9958793940 30 /home/zuul/.ansible/tmp/ansible-tmp-1765296305.223745-36916-258900288088673/AnsiballZ_command.py _'
Dec 09 16:05:05 compute-0 podman[94675]: 2025-12-09 16:05:05.505334701 +0000 UTC m=+0.036257746 container create 96a116e8d42d5f4a7d61e3f19e5b0150bf98744306c5dde00356ed6f645044c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:05:05 compute-0 sudo[94709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:05 compute-0 systemd[1]: Started libpod-conmon-96a116e8d42d5f4a7d61e3f19e5b0150bf98744306c5dde00356ed6f645044c8.scope.
Dec 09 16:05:05 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:05 compute-0 podman[94675]: 2025-12-09 16:05:05.574136003 +0000 UTC m=+0.105059068 container init 96a116e8d42d5f4a7d61e3f19e5b0150bf98744306c5dde00356ed6f645044c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dewdney, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 09 16:05:05 compute-0 podman[94675]: 2025-12-09 16:05:05.580615806 +0000 UTC m=+0.111538851 container start 96a116e8d42d5f4a7d61e3f19e5b0150bf98744306c5dde00356ed6f645044c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:05:05 compute-0 funny_dewdney[94714]: 167 167
Dec 09 16:05:05 compute-0 systemd[1]: libpod-96a116e8d42d5f4a7d61e3f19e5b0150bf98744306c5dde00356ed6f645044c8.scope: Deactivated successfully.
Dec 09 16:05:05 compute-0 podman[94675]: 2025-12-09 16:05:05.489244873 +0000 UTC m=+0.020167938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:05 compute-0 podman[94675]: 2025-12-09 16:05:05.585082385 +0000 UTC m=+0.116005460 container attach 96a116e8d42d5f4a7d61e3f19e5b0150bf98744306c5dde00356ed6f645044c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dewdney, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:05:05 compute-0 podman[94675]: 2025-12-09 16:05:05.587105149 +0000 UTC m=+0.118028224 container died 96a116e8d42d5f4a7d61e3f19e5b0150bf98744306c5dde00356ed6f645044c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dewdney, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 16:05:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3ddaf9ce1617d21895374ce9bd578a0ddeeaaad9267fa653fc4780e967eaf69-merged.mount: Deactivated successfully.
Dec 09 16:05:05 compute-0 podman[94675]: 2025-12-09 16:05:05.62586574 +0000 UTC m=+0.156788815 container remove 96a116e8d42d5f4a7d61e3f19e5b0150bf98744306c5dde00356ed6f645044c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dewdney, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 09 16:05:05 compute-0 systemd[1]: libpod-conmon-96a116e8d42d5f4a7d61e3f19e5b0150bf98744306c5dde00356ed6f645044c8.scope: Deactivated successfully.
Dec 09 16:05:05 compute-0 ansible-async_wrapper.py[94711]: Invoked with j9958793940 30 /home/zuul/.ansible/tmp/ansible-tmp-1765296305.223745-36916-258900288088673/AnsiballZ_command.py _
Dec 09 16:05:05 compute-0 ansible-async_wrapper.py[94736]: Starting module and watcher
Dec 09 16:05:05 compute-0 systemd[1]: Reloading.
Dec 09 16:05:05 compute-0 ansible-async_wrapper.py[94736]: Start watching 94737 (30)
Dec 09 16:05:05 compute-0 ansible-async_wrapper.py[94737]: Start module (94737)
Dec 09 16:05:05 compute-0 ansible-async_wrapper.py[94711]: Return async_wrapper task started.
Dec 09 16:05:05 compute-0 sudo[94709]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:05 compute-0 systemd-rc-local-generator[94763]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:05:05 compute-0 systemd-sysv-generator[94767]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:05:05 compute-0 python3[94738]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:05:05 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:05 compute-0 podman[94772]: 2025-12-09 16:05:05.926845984 +0000 UTC m=+0.048080471 container create f54f12ff809c4868db5948c920efc8ac553f99bd7822b60dc5ecefd50d02c107 (image=quay.io/ceph/ceph:v20, name=jovial_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:05:05 compute-0 ceph-mon[75222]: Deploying daemon rgw.rgw.compute-0.efuxpz on compute-0
Dec 09 16:05:05 compute-0 systemd[1]: Started libpod-conmon-f54f12ff809c4868db5948c920efc8ac553f99bd7822b60dc5ecefd50d02c107.scope.
Dec 09 16:05:05 compute-0 podman[94772]: 2025-12-09 16:05:05.903180504 +0000 UTC m=+0.024415071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:05:05 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1507ccd3723627be63a4874cbcb79731c4c39876e16d826c1beb1afb70814b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1507ccd3723627be63a4874cbcb79731c4c39876e16d826c1beb1afb70814b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:06 compute-0 systemd[1]: Reloading.
Dec 09 16:05:06 compute-0 podman[94772]: 2025-12-09 16:05:06.021314019 +0000 UTC m=+0.142548556 container init f54f12ff809c4868db5948c920efc8ac553f99bd7822b60dc5ecefd50d02c107 (image=quay.io/ceph/ceph:v20, name=jovial_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 09 16:05:06 compute-0 podman[94772]: 2025-12-09 16:05:06.027966926 +0000 UTC m=+0.149201453 container start f54f12ff809c4868db5948c920efc8ac553f99bd7822b60dc5ecefd50d02c107 (image=quay.io/ceph/ceph:v20, name=jovial_hofstadter, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:05:06 compute-0 podman[94772]: 2025-12-09 16:05:06.031502521 +0000 UTC m=+0.152737048 container attach f54f12ff809c4868db5948c920efc8ac553f99bd7822b60dc5ecefd50d02c107 (image=quay.io/ceph/ceph:v20, name=jovial_hofstadter, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:06 compute-0 systemd-rc-local-generator[94823]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:05:06 compute-0 systemd-sysv-generator[94827]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:05:06 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.efuxpz for 67f67f44-54fc-54ea-8df0-10931b6ecdaf...
Dec 09 16:05:06 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 16:05:06 compute-0 jovial_hofstadter[94789]: 
Dec 09 16:05:06 compute-0 jovial_hofstadter[94789]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
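The jovial_hofstadter container is the Ansible task from a moment earlier rerunning `ceph orch status --format json`; the JSON above is what the playbook asserts on. A minimal sketch of consuming that output outside Ansible, using the same one-shot container pattern (the fsid/keyring flags from the original command are omitted here for brevity):

    # Print just the availability flag from the orchestrator status.
    podman run --rm --net=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v20 \
      orch status --format json \
    | python3 -c 'import json,sys; print(json.load(sys.stdin)["available"])'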
Dec 09 16:05:06 compute-0 systemd[1]: libpod-f54f12ff809c4868db5948c920efc8ac553f99bd7822b60dc5ecefd50d02c107.scope: Deactivated successfully.
Dec 09 16:05:06 compute-0 podman[94772]: 2025-12-09 16:05:06.484629405 +0000 UTC m=+0.605863902 container died f54f12ff809c4868db5948c920efc8ac553f99bd7822b60dc5ecefd50d02c107 (image=quay.io/ceph/ceph:v20, name=jovial_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:05:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc1507ccd3723627be63a4874cbcb79731c4c39876e16d826c1beb1afb70814b-merged.mount: Deactivated successfully.
Dec 09 16:05:06 compute-0 podman[94772]: 2025-12-09 16:05:06.534065371 +0000 UTC m=+0.655299858 container remove f54f12ff809c4868db5948c920efc8ac553f99bd7822b60dc5ecefd50d02c107 (image=quay.io/ceph/ceph:v20, name=jovial_hofstadter, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:06 compute-0 systemd[1]: libpod-conmon-f54f12ff809c4868db5948c920efc8ac553f99bd7822b60dc5ecefd50d02c107.scope: Deactivated successfully.
Dec 09 16:05:06 compute-0 podman[94900]: 2025-12-09 16:05:06.544880379 +0000 UTC m=+0.058467228 container create 7f00419a4644aef9dce1fa0301835835d4bb386233bc723500dd8ee22076ad4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-rgw-rgw-compute-0-efuxpz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:06 compute-0 ansible-async_wrapper.py[94737]: Module complete (94737)
Dec 09 16:05:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34790271d93ab40782c77266edb9237153122730b6bcd4d08ea44b0952275156/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34790271d93ab40782c77266edb9237153122730b6bcd4d08ea44b0952275156/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34790271d93ab40782c77266edb9237153122730b6bcd4d08ea44b0952275156/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34790271d93ab40782c77266edb9237153122730b6bcd4d08ea44b0952275156/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.efuxpz supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:06 compute-0 podman[94900]: 2025-12-09 16:05:06.509007274 +0000 UTC m=+0.022594143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:06 compute-0 podman[94900]: 2025-12-09 16:05:06.609265583 +0000 UTC m=+0.122852472 container init 7f00419a4644aef9dce1fa0301835835d4bb386233bc723500dd8ee22076ad4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-rgw-rgw-compute-0-efuxpz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:06 compute-0 podman[94900]: 2025-12-09 16:05:06.613886416 +0000 UTC m=+0.127473295 container start 7f00419a4644aef9dce1fa0301835835d4bb386233bc723500dd8ee22076ad4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-rgw-rgw-compute-0-efuxpz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 09 16:05:06 compute-0 bash[94900]: 7f00419a4644aef9dce1fa0301835835d4bb386233bc723500dd8ee22076ad4d
Dec 09 16:05:06 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.efuxpz for 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
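With "Started Ceph rgw.rgw.compute-0.efuxpz", the gateway is now an ordinary systemd service; the earlier xfs "supports timestamps until 2038" kernel lines appear to be a side effect of bind-mounting host XFS paths (ceph.conf, /var/log/ceph, the daemon data dir) into the container. Two ways to inspect the result, using the ceph-<fsid>@<type>.<id> unit and ceph-<fsid>-<type>-<id> container names visible in this log:

    # The cephadm-generated unit, then the bind mounts of the running container.
    systemctl status 'ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@rgw.rgw.compute-0.efuxpz.service'
    podman inspect ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-rgw-rgw-compute-0-efuxpz \
      --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}'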
Dec 09 16:05:06 compute-0 sudo[94486]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:05:06 compute-0 radosgw[94933]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 09 16:05:06 compute-0 radosgw[94933]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Dec 09 16:05:06 compute-0 radosgw[94933]: framework: beast
Dec 09 16:05:06 compute-0 radosgw[94933]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 09 16:05:06 compute-0 radosgw[94933]: init_numa not setting numa affinity
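The radosgw startup lines confirm the configuration cephadm pinned moments earlier (the `config set name=rgw_frontends` dispatch at 16:05:04): a beast frontend on 192.168.122.100:8082. Roughly the same thing expressed with the public CLI, using the daemon's client name from this log:

    # Pin the frontend for this specific rgw daemon, then read it back.
    ceph config set client.rgw.rgw.compute-0.efuxpz rgw_frontends \
        "beast endpoint=192.168.122.100:8082"
    ceph config get client.rgw.rgw.compute-0.efuxpz rgw_frontends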
Dec 09 16:05:06 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:05:06 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 09 16:05:06 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:06 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev 2d88a8bc-6d32-4724-8966-3ba8d5270ffa (Updating rgw.rgw deployment (+1 -> 1))
Dec 09 16:05:06 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event 2d88a8bc-6d32-4724-8966-3ba8d5270ffa (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Dec 09 16:05:06 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec 09 16:05:06 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 09 16:05:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 09 16:05:06 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 09 16:05:06 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:06 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev 08c35792-2ff4-45f3-8bf7-91006ca9fec2 (Updating mds.cephfs deployment (+1 -> 1))
Dec 09 16:05:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.izecis", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 09 16:05:06 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.izecis", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 09 16:05:06 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.izecis", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 09 16:05:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:05:06 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:06 compute-0 ceph-mgr[75515]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.izecis on compute-0
Dec 09 16:05:06 compute-0 ceph-mgr[75515]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.izecis on compute-0
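The MDS deployment follows the same shape as the rgw one above: a progress event opens, a key with "profile mds" caps is minted, and "Deploying daemon mds.cephfs.compute-0.izecis" is logged. A sketch of the orchestrator command behind such an event, assuming the filesystem is named cephfs as the daemon id suggests:

    # One MDS for filesystem "cephfs", placed on compute-0.
    ceph orch apply mds cephfs --placement="1 compute-0"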
Dec 09 16:05:06 compute-0 sudo[94985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:06 compute-0 sudo[94985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:06 compute-0 sudo[94985]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:06 compute-0 sudo[95038]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzcohtiphpaexkhkzrzoqyfibdqobirb ; /usr/bin/python3'
Dec 09 16:05:06 compute-0 sudo[95038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:06 compute-0 sudo[95031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf
Dec 09 16:05:06 compute-0 sudo[95031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:06 compute-0 ceph-mon[75222]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:06 compute-0 ceph-mon[75222]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 16:05:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.izecis", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 09 16:05:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.izecis", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 09 16:05:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:06 compute-0 python3[95055]: ansible-ansible.legacy.async_status Invoked with jid=j9958793940.94711 mode=status _async_dir=/root/.ansible_async
Dec 09 16:05:07 compute-0 sudo[95038]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:07 compute-0 sudo[95121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khcgntjlippjjuqnhvutghvbdbzzogai ; /usr/bin/python3'
Dec 09 16:05:07 compute-0 sudo[95121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:07 compute-0 python3[95125]: ansible-ansible.legacy.async_status Invoked with jid=j9958793940.94711 mode=cleanup _async_dir=/root/.ansible_async
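The async_wrapper/async_status pairing is Ansible's fire-and-poll pattern: the podman task was launched with `async`, given job id j9958793940, then polled (mode=status) and reaped (mode=cleanup) as root, hence _async_dir=/root/.ansible_async. The ad-hoc equivalent, assuming an inventory entry for compute-0 (connection details omitted):

    # Poll an async job by id, then remove its status file.
    ansible compute-0 -b -m async_status -a "jid=j9958793940.94711"
    ansible compute-0 -b -m async_status -a "jid=j9958793940.94711 mode=cleanup"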
Dec 09 16:05:07 compute-0 podman[95152]: 2025-12-09 16:05:07.284336026 +0000 UTC m=+0.042855912 container create 6013eae97b94bd41fbac6cda9bf8e825c1d03e1668c145fc98f9c85fbf7dddf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:07 compute-0 sudo[95121]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:07 compute-0 systemd[1]: Started libpod-conmon-6013eae97b94bd41fbac6cda9bf8e825c1d03e1668c145fc98f9c85fbf7dddf0.scope.
Dec 09 16:05:07 compute-0 podman[95152]: 2025-12-09 16:05:07.265274469 +0000 UTC m=+0.023794395 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:07 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:07 compute-0 podman[95152]: 2025-12-09 16:05:07.392781964 +0000 UTC m=+0.151301870 container init 6013eae97b94bd41fbac6cda9bf8e825c1d03e1668c145fc98f9c85fbf7dddf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:05:07 compute-0 podman[95152]: 2025-12-09 16:05:07.406488559 +0000 UTC m=+0.165008445 container start 6013eae97b94bd41fbac6cda9bf8e825c1d03e1668c145fc98f9c85fbf7dddf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_matsumoto, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:07 compute-0 podman[95152]: 2025-12-09 16:05:07.410046033 +0000 UTC m=+0.168565919 container attach 6013eae97b94bd41fbac6cda9bf8e825c1d03e1668c145fc98f9c85fbf7dddf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_matsumoto, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:05:07 compute-0 happy_matsumoto[95169]: 167 167
Dec 09 16:05:07 compute-0 systemd[1]: libpod-6013eae97b94bd41fbac6cda9bf8e825c1d03e1668c145fc98f9c85fbf7dddf0.scope: Deactivated successfully.
Dec 09 16:05:07 compute-0 podman[95152]: 2025-12-09 16:05:07.416978228 +0000 UTC m=+0.175498154 container died 6013eae97b94bd41fbac6cda9bf8e825c1d03e1668c145fc98f9c85fbf7dddf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_matsumoto, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 09 16:05:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-73be8e369d3434f1744261b8f5d7067235323a5aa7a993b7d36940b6ca7b20d6-merged.mount: Deactivated successfully.
Dec 09 16:05:07 compute-0 podman[95152]: 2025-12-09 16:05:07.464349339 +0000 UTC m=+0.222869235 container remove 6013eae97b94bd41fbac6cda9bf8e825c1d03e1668c145fc98f9c85fbf7dddf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 16:05:07 compute-0 systemd[1]: libpod-conmon-6013eae97b94bd41fbac6cda9bf8e825c1d03e1668c145fc98f9c85fbf7dddf0.scope: Deactivated successfully.
Dec 09 16:05:07 compute-0 systemd[1]: Reloading.
Dec 09 16:05:07 compute-0 systemd-rc-local-generator[95214]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:05:07 compute-0 systemd-sysv-generator[95217]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:05:07 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 09 16:05:07 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec 09 16:05:07 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec 09 16:05:07 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec 09 16:05:07 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/520356323' entity='client.rgw.rgw.compute-0.efuxpz' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
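On first start radosgw creates its root pool and tags it, which is the dispatch above; it is also why the osdmap bumps to e38 and a fresh pg (8.0) shows up below. The equivalent manual step, with the pool and application names from the log:

    # Tag .rgw.root for the rgw application so POOL_APP_NOT_ENABLED never fires.
    ceph osd pool application enable .rgw.root rgw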
Dec 09 16:05:07 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 38 pg[8.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:07 compute-0 sudo[95246]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdeekoeloxezbsflhapuasrmlcxqjvlp ; /usr/bin/python3'
Dec 09 16:05:07 compute-0 sudo[95246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:07 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v80: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:07 compute-0 systemd[1]: Reloading.
Dec 09 16:05:07 compute-0 systemd-rc-local-generator[95281]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:05:07 compute-0 systemd-sysv-generator[95285]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:05:07 compute-0 python3[95250]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:05:07 compute-0 ceph-mon[75222]: Saving service rgw.rgw spec with placement compute-0
Dec 09 16:05:07 compute-0 ceph-mon[75222]: Deploying daemon mds.cephfs.compute-0.izecis on compute-0
Dec 09 16:05:07 compute-0 ceph-mon[75222]: osdmap e38: 3 total, 3 up, 3 in
Dec 09 16:05:07 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/520356323' entity='client.rgw.rgw.compute-0.efuxpz' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec 09 16:05:08 compute-0 podman[95289]: 2025-12-09 16:05:08.056362631 +0000 UTC m=+0.049207291 container create fb0314a4c25d51a461eb7173c763eb512bf583cabbbf507b141e3ffe4f1ffd1f (image=quay.io/ceph/ceph:v20, name=wizardly_lederberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:05:08 compute-0 podman[95289]: 2025-12-09 16:05:08.035340982 +0000 UTC m=+0.028185662 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:05:08 compute-0 systemd[1]: Started libpod-conmon-fb0314a4c25d51a461eb7173c763eb512bf583cabbbf507b141e3ffe4f1ffd1f.scope.
Dec 09 16:05:08 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.izecis for 67f67f44-54fc-54ea-8df0-10931b6ecdaf...
Dec 09 16:05:08 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdac71275c45d1979ea3abb04bef3a53e780e5f913d4694d609d33d106fe33f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdac71275c45d1979ea3abb04bef3a53e780e5f913d4694d609d33d106fe33f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:08 compute-0 podman[95289]: 2025-12-09 16:05:08.217105901 +0000 UTC m=+0.209950571 container init fb0314a4c25d51a461eb7173c763eb512bf583cabbbf507b141e3ffe4f1ffd1f (image=quay.io/ceph/ceph:v20, name=wizardly_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:05:08 compute-0 podman[95289]: 2025-12-09 16:05:08.225689479 +0000 UTC m=+0.218534119 container start fb0314a4c25d51a461eb7173c763eb512bf583cabbbf507b141e3ffe4f1ffd1f (image=quay.io/ceph/ceph:v20, name=wizardly_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 16:05:08 compute-0 podman[95289]: 2025-12-09 16:05:08.228940956 +0000 UTC m=+0.221785616 container attach fb0314a4c25d51a461eb7173c763eb512bf583cabbbf507b141e3ffe4f1ffd1f (image=quay.io/ceph/ceph:v20, name=wizardly_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:08 compute-0 podman[95376]: 2025-12-09 16:05:08.394564856 +0000 UTC m=+0.038266780 container create 63e30a35b7b96951b132aa123d944f19fef8cbb23554d42058a7f839e76cf474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mds-cephfs-compute-0-izecis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:05:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42024003d38bb02feb1addc572909e5dfad393f2a600bf3d26bf6bef64b57da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42024003d38bb02feb1addc572909e5dfad393f2a600bf3d26bf6bef64b57da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42024003d38bb02feb1addc572909e5dfad393f2a600bf3d26bf6bef64b57da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42024003d38bb02feb1addc572909e5dfad393f2a600bf3d26bf6bef64b57da/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.izecis supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:08 compute-0 podman[95376]: 2025-12-09 16:05:08.455879018 +0000 UTC m=+0.099580962 container init 63e30a35b7b96951b132aa123d944f19fef8cbb23554d42058a7f839e76cf474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mds-cephfs-compute-0-izecis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:08 compute-0 podman[95376]: 2025-12-09 16:05:08.461491938 +0000 UTC m=+0.105193862 container start 63e30a35b7b96951b132aa123d944f19fef8cbb23554d42058a7f839e76cf474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mds-cephfs-compute-0-izecis, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:05:08 compute-0 bash[95376]: 63e30a35b7b96951b132aa123d944f19fef8cbb23554d42058a7f839e76cf474
Dec 09 16:05:08 compute-0 podman[95376]: 2025-12-09 16:05:08.376843944 +0000 UTC m=+0.020545888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:08 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.izecis for 67f67f44-54fc-54ea-8df0-10931b6ecdaf.
Dec 09 16:05:08 compute-0 sudo[95031]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:05:08 compute-0 ceph-mds[95396]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 16:05:08 compute-0 ceph-mds[95396]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Dec 09 16:05:08 compute-0 ceph-mds[95396]: main not setting numa affinity
Dec 09 16:05:08 compute-0 ceph-mds[95396]: pidfile_write: ignore empty --pid-file
Dec 09 16:05:08 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mds-cephfs-compute-0-izecis[95392]: starting mds.cephfs.compute-0.izecis at 
Dec 09 16:05:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:05:08 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis Updating MDS map to version 2 from mon.0
Dec 09 16:05:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 09 16:05:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:08 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev 08c35792-2ff4-45f3-8bf7-91006ca9fec2 (Updating mds.cephfs deployment (+1 -> 1))
Dec 09 16:05:08 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event 08c35792-2ff4-45f3-8bf7-91006ca9fec2 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Dec 09 16:05:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec 09 16:05:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 09 16:05:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:08 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14255 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 16:05:08 compute-0 wizardly_lederberg[95307]: 
Dec 09 16:05:08 compute-0 wizardly_lederberg[95307]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 09 16:05:08 compute-0 sudo[95415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:05:08 compute-0 sudo[95415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:08 compute-0 systemd[1]: libpod-fb0314a4c25d51a461eb7173c763eb512bf583cabbbf507b141e3ffe4f1ffd1f.scope: Deactivated successfully.
Dec 09 16:05:08 compute-0 sudo[95415]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:08 compute-0 podman[95289]: 2025-12-09 16:05:08.672266369 +0000 UTC m=+0.665111039 container died fb0314a4c25d51a461eb7173c763eb512bf583cabbbf507b141e3ffe4f1ffd1f (image=quay.io/ceph/ceph:v20, name=wizardly_lederberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 09 16:05:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 09 16:05:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdac71275c45d1979ea3abb04bef3a53e780e5f913d4694d609d33d106fe33f1-merged.mount: Deactivated successfully.
Dec 09 16:05:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/520356323' entity='client.rgw.rgw.compute-0.efuxpz' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 09 16:05:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec 09 16:05:08 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec 09 16:05:08 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 39 pg[8.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:08 compute-0 podman[95289]: 2025-12-09 16:05:08.72524226 +0000 UTC m=+0.718086900 container remove fb0314a4c25d51a461eb7173c763eb512bf583cabbbf507b141e3ffe4f1ffd1f (image=quay.io/ceph/ceph:v20, name=wizardly_lederberg, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:05:08 compute-0 systemd[1]: libpod-conmon-fb0314a4c25d51a461eb7173c763eb512bf583cabbbf507b141e3ffe4f1ffd1f.scope: Deactivated successfully.
Dec 09 16:05:08 compute-0 sudo[95246]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:08 compute-0 sudo[95443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:08 compute-0 sudo[95443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:08 compute-0 sudo[95443]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:08 compute-0 sudo[95995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 09 16:05:08 compute-0 sudo[95995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:09 compute-0 podman[96110]: 2025-12-09 16:05:09.282752682 +0000 UTC m=+0.066122361 container exec 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 16:05:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6814/305759122,v1:192.168.122.100:6815/305759122] as mds.0
Dec 09 16:05:09 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.izecis assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 09 16:05:09 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 09 16:05:09 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 09 16:05:09 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 09 16:05:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:05:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).mds e3 new map
Dec 09 16:05:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-12-09T16:05:09.297800+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        3
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T16:04:56.239490+0000
                                           modified        2025-12-09T16:05:09.297782+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14257}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.izecis{0:14257} state up:creating seq 1 addr [v2:192.168.122.100:6814/305759122,v1:192.168.122.100:6815/305759122] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis Updating MDS map to version 3 from mon.0
Dec 09 16:05:09 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/305759122,v1:192.168.122.100:6815/305759122] up:boot
Dec 09 16:05:09 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.izecis=up:creating}
Dec 09 16:05:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.izecis"} v 0)
Dec 09 16:05:09 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.izecis"} : dispatch
Dec 09 16:05:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).mds e3 all = 0
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.3 handle_mds_map I am now mds.0.3
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.cache creating system inode with ino:0x1
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.cache creating system inode with ino:0x100
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.cache creating system inode with ino:0x600
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.cache creating system inode with ino:0x601
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.cache creating system inode with ino:0x602
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.cache creating system inode with ino:0x603
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.cache creating system inode with ino:0x604
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.cache creating system inode with ino:0x605
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.cache creating system inode with ino:0x606
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.cache creating system inode with ino:0x607
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.cache creating system inode with ino:0x608
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.cache creating system inode with ino:0x609
Dec 09 16:05:09 compute-0 ceph-mds[95396]: mds.0.3 creating_done
Dec 09 16:05:09 compute-0 ceph-mon[75222]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.izecis is now active in filesystem cephfs as rank 0
Dec 09 16:05:09 compute-0 podman[96110]: 2025-12-09 16:05:09.406885497 +0000 UTC m=+0.190255166 container exec_died 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 16:05:09 compute-0 sudo[96187]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axbdthdchlxefbzssguqtnidzjgpacjm ; /usr/bin/python3'
Dec 09 16:05:09 compute-0 sudo[96187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:09 compute-0 ceph-mon[75222]: pgmap v80: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:09 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/520356323' entity='client.rgw.rgw.compute-0.efuxpz' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 09 16:05:09 compute-0 ceph-mon[75222]: osdmap e39: 3 total, 3 up, 3 in
Dec 09 16:05:09 compute-0 ceph-mon[75222]: daemon mds.cephfs.compute-0.izecis assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 09 16:05:09 compute-0 ceph-mon[75222]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 09 16:05:09 compute-0 ceph-mon[75222]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 09 16:05:09 compute-0 ceph-mon[75222]: Cluster is now healthy
Dec 09 16:05:09 compute-0 ceph-mon[75222]: mds.? [v2:192.168.122.100:6814/305759122,v1:192.168.122.100:6815/305759122] up:boot
Dec 09 16:05:09 compute-0 ceph-mon[75222]: fsmap cephfs:1 {0=cephfs.compute-0.izecis=up:creating}
Dec 09 16:05:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.izecis"} : dispatch
Dec 09 16:05:09 compute-0 ceph-mon[75222]: daemon mds.cephfs.compute-0.izecis is now active in filesystem cephfs as rank 0
Dec 09 16:05:09 compute-0 python3[96196]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:05:09 compute-0 podman[96226]: 2025-12-09 16:05:09.692532542 +0000 UTC m=+0.036669967 container create 6e12a2c3d7f3fdf148d26e786fc5313bf1ad2af9c57eed5e4c37e769f2a99874 (image=quay.io/ceph/ceph:v20, name=competent_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:05:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 09 16:05:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 09 16:05:09 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 09 16:05:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 09 16:05:09 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec 09 16:05:09 compute-0 systemd[1]: Started libpod-conmon-6e12a2c3d7f3fdf148d26e786fc5313bf1ad2af9c57eed5e4c37e769f2a99874.scope.
Dec 09 16:05:09 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:09 compute-0 podman[96226]: 2025-12-09 16:05:09.67705613 +0000 UTC m=+0.021193575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f642e406264066691719cbabd1b20760c7de49fada56e9b43ecdefa9d9e0667/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f642e406264066691719cbabd1b20760c7de49fada56e9b43ecdefa9d9e0667/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:09 compute-0 podman[96226]: 2025-12-09 16:05:09.78820893 +0000 UTC m=+0.132346365 container init 6e12a2c3d7f3fdf148d26e786fc5313bf1ad2af9c57eed5e4c37e769f2a99874 (image=quay.io/ceph/ceph:v20, name=competent_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:05:09 compute-0 podman[96226]: 2025-12-09 16:05:09.798161805 +0000 UTC m=+0.142299230 container start 6e12a2c3d7f3fdf148d26e786fc5313bf1ad2af9c57eed5e4c37e769f2a99874 (image=quay.io/ceph/ceph:v20, name=competent_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 09 16:05:09 compute-0 podman[96226]: 2025-12-09 16:05:09.801433272 +0000 UTC m=+0.145570727 container attach 6e12a2c3d7f3fdf148d26e786fc5313bf1ad2af9c57eed5e4c37e769f2a99874 (image=quay.io/ceph/ceph:v20, name=competent_mayer, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 09 16:05:09 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v83: 9 pgs: 1 unknown, 8 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 1.2 KiB/s wr, 2 op/s
Dec 09 16:05:10 compute-0 sudo[95995]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:10 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} v 0)
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} : dispatch
Dec 09 16:05:10 compute-0 competent_mayer[96258]: 
Dec 09 16:05:10 compute-0 competent_mayer[96258]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Dec 09 16:05:10 compute-0 systemd[1]: libpod-6e12a2c3d7f3fdf148d26e786fc5313bf1ad2af9c57eed5e4c37e769f2a99874.scope: Deactivated successfully.
Dec 09 16:05:10 compute-0 podman[96226]: 2025-12-09 16:05:10.234653826 +0000 UTC m=+0.578791301 container died 6e12a2c3d7f3fdf148d26e786fc5313bf1ad2af9c57eed5e4c37e769f2a99874 (image=quay.io/ceph/ceph:v20, name=competent_mayer, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:05:10 compute-0 sudo[96364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f642e406264066691719cbabd1b20760c7de49fada56e9b43ecdefa9d9e0667-merged.mount: Deactivated successfully.
Dec 09 16:05:10 compute-0 sudo[96364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:10 compute-0 sudo[96364]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:10 compute-0 podman[96226]: 2025-12-09 16:05:10.292190838 +0000 UTC m=+0.636328263 container remove 6e12a2c3d7f3fdf148d26e786fc5313bf1ad2af9c57eed5e4c37e769f2a99874 (image=quay.io/ceph/ceph:v20, name=competent_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 09 16:05:10 compute-0 systemd[1]: libpod-conmon-6e12a2c3d7f3fdf148d26e786fc5313bf1ad2af9c57eed5e4c37e769f2a99874.scope: Deactivated successfully.
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).mds e4 new map
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-12-09T16:05:10.301775+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T16:04:56.239490+0000
                                           modified        2025-12-09T16:05:10.301772+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14257}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14257 members: 14257
                                           [mds.cephfs.compute-0.izecis{0:14257} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/305759122,v1:192.168.122.100:6815/305759122] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 09 16:05:10 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis Updating MDS map to version 4 from mon.0
Dec 09 16:05:10 compute-0 ceph-mds[95396]: mds.0.3 handle_mds_map I am now mds.0.3
Dec 09 16:05:10 compute-0 ceph-mds[95396]: mds.0.3 handle_mds_map state change up:creating --> up:active
Dec 09 16:05:10 compute-0 ceph-mds[95396]: mds.0.3 recovery_done -- successful recovery!
Dec 09 16:05:10 compute-0 ceph-mds[95396]: mds.0.3 active_start
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/305759122,v1:192.168.122.100:6815/305759122] up:active
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.izecis=up:active}
Dec 09 16:05:10 compute-0 sudo[96187]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:10 compute-0 sudo[96402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:05:10 compute-0 sudo[96402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:10 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 40 pg[9.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:10 compute-0 ceph-mon[75222]: from='client.14255 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: osdmap e40: 3 total, 3 up, 3 in
Dec 09 16:05:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:10 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:10 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:10 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} : dispatch
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mds.? [v2:192.168.122.100:6814/305759122,v1:192.168.122.100:6815/305759122] up:active
Dec 09 16:05:10 compute-0 ceph-mon[75222]: fsmap cephfs:1 {0=cephfs.compute-0.izecis=up:active}
Dec 09 16:05:10 compute-0 podman[96443]: 2025-12-09 16:05:10.60739873 +0000 UTC m=+0.050234018 container create de597bb9a246fd4d816085c3db7d17b0cba2b32626a35e09779e27d6f5680d11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chaplygin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 09 16:05:10 compute-0 systemd[1]: Started libpod-conmon-de597bb9a246fd4d816085c3db7d17b0cba2b32626a35e09779e27d6f5680d11.scope.
Dec 09 16:05:10 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:10 compute-0 podman[96443]: 2025-12-09 16:05:10.582430256 +0000 UTC m=+0.025265594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:10 compute-0 ansible-async_wrapper.py[94736]: Done in kid B.
Dec 09 16:05:10 compute-0 podman[96443]: 2025-12-09 16:05:10.686695892 +0000 UTC m=+0.129531190 container init de597bb9a246fd4d816085c3db7d17b0cba2b32626a35e09779e27d6f5680d11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chaplygin, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 09 16:05:10 compute-0 podman[96443]: 2025-12-09 16:05:10.697661984 +0000 UTC m=+0.140497272 container start de597bb9a246fd4d816085c3db7d17b0cba2b32626a35e09779e27d6f5680d11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:05:10 compute-0 zealous_chaplygin[96459]: 167 167
Dec 09 16:05:10 compute-0 systemd[1]: libpod-de597bb9a246fd4d816085c3db7d17b0cba2b32626a35e09779e27d6f5680d11.scope: Deactivated successfully.
Dec 09 16:05:10 compute-0 podman[96443]: 2025-12-09 16:05:10.701833545 +0000 UTC m=+0.144668893 container attach de597bb9a246fd4d816085c3db7d17b0cba2b32626a35e09779e27d6f5680d11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chaplygin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:10 compute-0 podman[96443]: 2025-12-09 16:05:10.702921814 +0000 UTC m=+0.145757112 container died de597bb9a246fd4d816085c3db7d17b0cba2b32626a35e09779e27d6f5680d11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chaplygin, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 09 16:05:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 09 16:05:10 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 09 16:05:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-65d3ed37c386ac6bb187451986b68d83456ff42835df8b0fa232a3821468f0c0-merged.mount: Deactivated successfully.
Dec 09 16:05:10 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 41 pg[9.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:10 compute-0 podman[96443]: 2025-12-09 16:05:10.749257547 +0000 UTC m=+0.192092835 container remove de597bb9a246fd4d816085c3db7d17b0cba2b32626a35e09779e27d6f5680d11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chaplygin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Dec 09 16:05:10 compute-0 systemd[1]: libpod-conmon-de597bb9a246fd4d816085c3db7d17b0cba2b32626a35e09779e27d6f5680d11.scope: Deactivated successfully.
Dec 09 16:05:10 compute-0 podman[96485]: 2025-12-09 16:05:10.954222894 +0000 UTC m=+0.061003595 container create 66122b4c1e10ab2ec4b6bdf3d5cbe276a75ffca4930b8cdd1e795a9224c81d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaum, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:05:10 compute-0 systemd[1]: Started libpod-conmon-66122b4c1e10ab2ec4b6bdf3d5cbe276a75ffca4930b8cdd1e795a9224c81d69.scope.
Dec 09 16:05:11 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:11 compute-0 podman[96485]: 2025-12-09 16:05:10.928950322 +0000 UTC m=+0.035731113 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54431d5928493ac5e2e239aecbc81e5da88042d7cd188643eee74b86c291d7d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54431d5928493ac5e2e239aecbc81e5da88042d7cd188643eee74b86c291d7d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54431d5928493ac5e2e239aecbc81e5da88042d7cd188643eee74b86c291d7d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54431d5928493ac5e2e239aecbc81e5da88042d7cd188643eee74b86c291d7d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54431d5928493ac5e2e239aecbc81e5da88042d7cd188643eee74b86c291d7d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:11 compute-0 podman[96485]: 2025-12-09 16:05:11.043857881 +0000 UTC m=+0.150638672 container init 66122b4c1e10ab2ec4b6bdf3d5cbe276a75ffca4930b8cdd1e795a9224c81d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaum, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 16:05:11 compute-0 podman[96485]: 2025-12-09 16:05:11.054473264 +0000 UTC m=+0.161253985 container start 66122b4c1e10ab2ec4b6bdf3d5cbe276a75ffca4930b8cdd1e795a9224c81d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:05:11 compute-0 podman[96485]: 2025-12-09 16:05:11.058197693 +0000 UTC m=+0.164978484 container attach 66122b4c1e10ab2ec4b6bdf3d5cbe276a75ffca4930b8cdd1e795a9224c81d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaum, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:05:11 compute-0 sudo[96529]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdzqhhqxxnbzrslgwjxxadtcsevxoaoa ; /usr/bin/python3'
Dec 09 16:05:11 compute-0 sudo[96529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:11 compute-0 python3[96531]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:05:11 compute-0 podman[96534]: 2025-12-09 16:05:11.372302456 +0000 UTC m=+0.061292843 container create 4b061edcffe658d1563f699c3f074c17080c5a4121afd2cde85d4e22631e6087 (image=quay.io/ceph/ceph:v20, name=strange_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 09 16:05:11 compute-0 systemd[1]: Started libpod-conmon-4b061edcffe658d1563f699c3f074c17080c5a4121afd2cde85d4e22631e6087.scope.
Dec 09 16:05:11 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7586dea54d6179a7f60aa6489d984501b7d097e4ba4d8f1e0f59e47748a1acc6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7586dea54d6179a7f60aa6489d984501b7d097e4ba4d8f1e0f59e47748a1acc6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:11 compute-0 podman[96534]: 2025-12-09 16:05:11.353078234 +0000 UTC m=+0.042068611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:05:11 compute-0 podman[96534]: 2025-12-09 16:05:11.457628607 +0000 UTC m=+0.146618974 container init 4b061edcffe658d1563f699c3f074c17080c5a4121afd2cde85d4e22631e6087 (image=quay.io/ceph/ceph:v20, name=strange_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 16:05:11 compute-0 podman[96534]: 2025-12-09 16:05:11.466466613 +0000 UTC m=+0.155456990 container start 4b061edcffe658d1563f699c3f074c17080c5a4121afd2cde85d4e22631e6087 (image=quay.io/ceph/ceph:v20, name=strange_mclaren, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:05:11 compute-0 podman[96534]: 2025-12-09 16:05:11.470346426 +0000 UTC m=+0.159336783 container attach 4b061edcffe658d1563f699c3f074c17080c5a4121afd2cde85d4e22631e6087 (image=quay.io/ceph/ceph:v20, name=strange_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:11 compute-0 ceph-mgr[75515]: [progress INFO root] Writing back 5 completed events
Dec 09 16:05:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 16:05:11 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:11 compute-0 ceph-mon[75222]: pgmap v83: 9 pgs: 1 unknown, 8 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 1.2 KiB/s wr, 2 op/s
Dec 09 16:05:11 compute-0 ceph-mon[75222]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 16:05:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 09 16:05:11 compute-0 ceph-mon[75222]: osdmap e41: 3 total, 3 up, 3 in
Dec 09 16:05:11 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:11 compute-0 sad_chaum[96501]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:05:11 compute-0 sad_chaum[96501]: --> All data devices are unavailable
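The two sad_chaum lines above are cephadm's ceph-volume device scan: zero raw physical devices were offered and the three LVM-backed candidates are already consumed (they back osd.0-2, as the lvm list payload further down confirms), so ceph-volume declares every data device unavailable. The same availability check can be run by hand through ceph-volume's inventory; a minimal Python sketch, assuming cephadm is on PATH, emits only the JSON document on stdout, and uses the fsid seen throughout this log:

    import json
    import subprocess

    FSID = "67f67f44-54fc-54ea-8df0-10931b6ecdaf"  # cluster fsid from this log

    # Run ceph-volume's device inventory inside the cephadm-managed container
    # and report which block devices are still usable for new OSDs.
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        if dev["available"]:
            print(dev["path"], "available")
        else:
            print(dev["path"], "rejected:", ", ".join(dev.get("rejected_reasons", [])))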
Dec 09 16:05:11 compute-0 systemd[1]: libpod-66122b4c1e10ab2ec4b6bdf3d5cbe276a75ffca4930b8cdd1e795a9224c81d69.scope: Deactivated successfully.
Dec 09 16:05:11 compute-0 podman[96485]: 2025-12-09 16:05:11.624257644 +0000 UTC m=+0.731038355 container died 66122b4c1e10ab2ec4b6bdf3d5cbe276a75ffca4930b8cdd1e795a9224c81d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaum, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:05:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-54431d5928493ac5e2e239aecbc81e5da88042d7cd188643eee74b86c291d7d9-merged.mount: Deactivated successfully.
Dec 09 16:05:11 compute-0 podman[96485]: 2025-12-09 16:05:11.673785722 +0000 UTC m=+0.780566413 container remove 66122b4c1e10ab2ec4b6bdf3d5cbe276a75ffca4930b8cdd1e795a9224c81d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaum, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 16:05:11 compute-0 systemd[1]: libpod-conmon-66122b4c1e10ab2ec4b6bdf3d5cbe276a75ffca4930b8cdd1e795a9224c81d69.scope: Deactivated successfully.
Dec 09 16:05:11 compute-0 sudo[96402]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 09 16:05:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 09 16:05:11 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 09 16:05:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 09 16:05:11 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec 09 16:05:11 compute-0 sudo[96596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:11 compute-0 sudo[96596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:11 compute-0 sudo[96596]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:11 compute-0 sudo[96621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:05:11 compute-0 sudo[96621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:11 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v86: 10 pgs: 2 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Dec 09 16:05:11 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 16:05:11 compute-0 strange_mclaren[96554]: 
Dec 09 16:05:11 compute-0 strange_mclaren[96554]: [{"container_id": "1d6c84974beb", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.21%", "created": "2025-12-09T16:03:48.235513Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-09T16:03:48.306395Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-09T16:05:10.151532Z", "memory_usage": 7795113, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2025-12-09T16:03:48.139593Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@crash.compute-0", "version": "20.2.0"}, {"container_id": "63e30a35b7b9", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "8.88%", "created": "2025-12-09T16:05:08.481850Z", "daemon_id": "cephfs.compute-0.izecis", "daemon_name": "mds.cephfs.compute-0.izecis", "daemon_type": "mds", "events": ["2025-12-09T16:05:08.550666Z daemon:mds.cephfs.compute-0.izecis [INFO] \"Deployed mds.cephfs.compute-0.izecis on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-09T16:05:10.151995Z", "memory_usage": 15665725, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2025-12-09T16:05:08.381143Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@mds.cephfs.compute-0.izecis", "version": "20.2.0"}, {"container_id": "f232def5bd3d", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "18.41%", "created": "2025-12-09T16:03:06.111183Z", "daemon_id": "compute-0.ysegzv", "daemon_name": "mgr.compute-0.ysegzv", "daemon_type": "mgr", "events": ["2025-12-09T16:03:52.882112Z daemon:mgr.compute-0.ysegzv [INFO] \"Reconfigured mgr.compute-0.ysegzv on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-09T16:05:10.151460Z", "memory_usage": 546098380, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-09T16:03:06.005837Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@mgr.compute-0.ysegzv", "version": "20.2.0"}, {"container_id": "9ce3cdfc68db", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.82%", "created": "2025-12-09T16:03:01.698057Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-09T16:03:52.206252Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-09T16:05:10.151363Z", "memory_request": 2147483648, "memory_usage": 39741030, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2025-12-09T16:03:04.055403Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@mon.compute-0", "version": "20.2.0"}, {"container_id": "012822ae8bed", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.62%", "created": "2025-12-09T16:04:12.845317Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-09T16:04:12.916010Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-09T16:05:10.151602Z", "memory_request": 4294967296, "memory_usage": 58804142, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-09T16:04:12.701287Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@osd.0", "version": "20.2.0"}, {"container_id": "187eb50611d2", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.90%", "created": "2025-12-09T16:04:16.873449Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-09T16:04:16.972934Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-09T16:05:10.151689Z", "memory_request": 4294967296, "memory_usage": 59108229, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-09T16:04:16.720170Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@osd.1", "version": "20.2.0"}, {"container_id": "2c04f740a8b2", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.81%", "created": "2025-12-09T16:04:21.421597Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-09T16:04:21.551168Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-09T16:05:10.151793Z", "memory_request": 4294967296, "memory_usage": 56853790, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-09T16:04:21.299871Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@osd.2", "version": "20.2.0"}, {"container_id": "7f00419a4644", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "5.28%", "created": "2025-12-09T16:05:06.629801Z", "daemon_id": "rgw.compute-0.efuxpz", "daemon_name": "rgw.rgw.compute-0.efuxpz", "daemon_type": "rgw", "events": ["2025-12-09T16:05:06.725213Z daemon:rgw.rgw.compute-0.efuxpz [INFO] \"Deployed rgw.rgw.compute-0.efuxpz on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-12-09T16:05:10.151890Z", "memory_usage": 54274293, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-12-09T16:05:06.513164Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf@rgw.rgw.compute-0.efuxpz", "version": "20.2.0"}]
Dec 09 16:05:11 compute-0 podman[96534]: 2025-12-09 16:05:11.927164889 +0000 UTC m=+0.616155236 container died 4b061edcffe658d1563f699c3f074c17080c5a4121afd2cde85d4e22631e6087 (image=quay.io/ceph/ceph:v20, name=strange_mclaren, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Dec 09 16:05:11 compute-0 systemd[1]: libpod-4b061edcffe658d1563f699c3f074c17080c5a4121afd2cde85d4e22631e6087.scope: Deactivated successfully.
Dec 09 16:05:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7586dea54d6179a7f60aa6489d984501b7d097e4ba4d8f1e0f59e47748a1acc6-merged.mount: Deactivated successfully.
Dec 09 16:05:11 compute-0 podman[96534]: 2025-12-09 16:05:11.977002806 +0000 UTC m=+0.665993183 container remove 4b061edcffe658d1563f699c3f074c17080c5a4121afd2cde85d4e22631e6087 (image=quay.io/ceph/ceph:v20, name=strange_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:05:11 compute-0 systemd[1]: libpod-conmon-4b061edcffe658d1563f699c3f074c17080c5a4121afd2cde85d4e22631e6087.scope: Deactivated successfully.
Dec 09 16:05:11 compute-0 sudo[96529]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:12 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 42 pg[10.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [2] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:12 compute-0 podman[96673]: 2025-12-09 16:05:12.124289867 +0000 UTC m=+0.043493869 container create 57090c58c9aca318dcdf2e3c555125ca23dd1f1009f8d261aee85f25d7c0642b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_meninsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:12 compute-0 rsyslogd[1004]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "1d6c84974beb", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
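rsyslogd truncates the orch ps payload here because the 8842-byte message exceeds its configured 8096-byte limit (the line itself cites https://www.rsyslog.com/e/2445). If such JSON blobs need to survive intact in syslog files, the limit can be raised in /etc/rsyslog.conf; the directive must appear near the top, before any input modules are loaded. A minimal sketch, where 64k is an arbitrary choice and newer releases equally accept the RainerScript form global(maxMessageSize="64k"):

    # /etc/rsyslog.conf -- raise the per-message size limit (default is 8k);
    # must precede module() / $ModLoad lines to take effect.
    $MaxMessageSize 64k

A restart of the rsyslog service is needed afterwards.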
Dec 09 16:05:12 compute-0 systemd[1]: Started libpod-conmon-57090c58c9aca318dcdf2e3c555125ca23dd1f1009f8d261aee85f25d7c0642b.scope.
Dec 09 16:05:12 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:12 compute-0 podman[96673]: 2025-12-09 16:05:12.193112939 +0000 UTC m=+0.112316971 container init 57090c58c9aca318dcdf2e3c555125ca23dd1f1009f8d261aee85f25d7c0642b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:05:12 compute-0 podman[96673]: 2025-12-09 16:05:12.200008003 +0000 UTC m=+0.119212005 container start 57090c58c9aca318dcdf2e3c555125ca23dd1f1009f8d261aee85f25d7c0642b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 09 16:05:12 compute-0 podman[96673]: 2025-12-09 16:05:12.104036308 +0000 UTC m=+0.023240280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:12 compute-0 podman[96673]: 2025-12-09 16:05:12.205002946 +0000 UTC m=+0.124206948 container attach 57090c58c9aca318dcdf2e3c555125ca23dd1f1009f8d261aee85f25d7c0642b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_meninsky, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:05:12 compute-0 compassionate_meninsky[96689]: 167 167
Dec 09 16:05:12 compute-0 systemd[1]: libpod-57090c58c9aca318dcdf2e3c555125ca23dd1f1009f8d261aee85f25d7c0642b.scope: Deactivated successfully.
Dec 09 16:05:12 compute-0 conmon[96689]: conmon 57090c58c9aca318dcdf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57090c58c9aca318dcdf2e3c555125ca23dd1f1009f8d261aee85f25d7c0642b.scope/container/memory.events
Dec 09 16:05:12 compute-0 podman[96673]: 2025-12-09 16:05:12.209084775 +0000 UTC m=+0.128288777 container died 57090c58c9aca318dcdf2e3c555125ca23dd1f1009f8d261aee85f25d7c0642b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_meninsky, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0de9babd20784a854bc71bb85d1e92064e5924aa3fe19e0ec251f6b8ff11258b-merged.mount: Deactivated successfully.
Dec 09 16:05:12 compute-0 podman[96673]: 2025-12-09 16:05:12.271588969 +0000 UTC m=+0.190792971 container remove 57090c58c9aca318dcdf2e3c555125ca23dd1f1009f8d261aee85f25d7c0642b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:05:12 compute-0 systemd[1]: libpod-conmon-57090c58c9aca318dcdf2e3c555125ca23dd1f1009f8d261aee85f25d7c0642b.scope: Deactivated successfully.
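The compassionate_meninsky container above (and bold_perlman further down) exists only to print "167 167": before writing files the daemons must own, cephadm probes each image for the uid and gid that Ceph runs as, and 167:167 is the ceph user and group in these images. A hypothetical re-run of that probe from Python; the stat-on-/var/lib/ceph mechanism is an assumption about how cephadm implements it, not something this log states:

    import subprocess

    IMAGE = "quay.io/ceph/ceph:v20"  # image probed in the log above

    # Ask the image what uid/gid owns /var/lib/ceph -- the same "167 167"
    # answer that compassionate_meninsky printed (assumed probe path).
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    uid, gid = out.split()
    print(uid, gid)  # expected: 167 167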
Dec 09 16:05:12 compute-0 podman[96713]: 2025-12-09 16:05:12.423945125 +0000 UTC m=+0.048017679 container create d8d697efeaa7592b7fdf1acd57a71357a460d89e786d79feefd72d277145ffcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:12 compute-0 systemd[1]: Started libpod-conmon-d8d697efeaa7592b7fdf1acd57a71357a460d89e786d79feefd72d277145ffcc.scope.
Dec 09 16:05:12 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36e6bf1d3fdad5f8aef4dcf51fdee26c836f9b5102186b6ef035ad685827dac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:12 compute-0 podman[96713]: 2025-12-09 16:05:12.398079416 +0000 UTC m=+0.022151990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36e6bf1d3fdad5f8aef4dcf51fdee26c836f9b5102186b6ef035ad685827dac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36e6bf1d3fdad5f8aef4dcf51fdee26c836f9b5102186b6ef035ad685827dac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36e6bf1d3fdad5f8aef4dcf51fdee26c836f9b5102186b6ef035ad685827dac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:12 compute-0 podman[96713]: 2025-12-09 16:05:12.510702805 +0000 UTC m=+0.134775429 container init d8d697efeaa7592b7fdf1acd57a71357a460d89e786d79feefd72d277145ffcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:12 compute-0 podman[96713]: 2025-12-09 16:05:12.524080181 +0000 UTC m=+0.148152775 container start d8d697efeaa7592b7fdf1acd57a71357a460d89e786d79feefd72d277145ffcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_raman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:12 compute-0 podman[96713]: 2025-12-09 16:05:12.527648166 +0000 UTC m=+0.151720760 container attach d8d697efeaa7592b7fdf1acd57a71357a460d89e786d79feefd72d277145ffcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_raman, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 16:05:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 09 16:05:12 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 09 16:05:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 09 16:05:12 compute-0 ceph-mon[75222]: osdmap e42: 3 total, 3 up, 3 in
Dec 09 16:05:12 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec 09 16:05:12 compute-0 ceph-mon[75222]: pgmap v86: 10 pgs: 2 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Dec 09 16:05:12 compute-0 ceph-mon[75222]: from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 16:05:12 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 09 16:05:12 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 43 pg[10.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [2] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:12 compute-0 dazzling_raman[96729]: {
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:     "0": [
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:         {
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "devices": [
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "/dev/loop3"
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             ],
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_name": "ceph_lv0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_size": "21470642176",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "name": "ceph_lv0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "tags": {
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.cluster_name": "ceph",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.crush_device_class": "",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.encrypted": "0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.objectstore": "bluestore",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.osd_id": "0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.type": "block",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.vdo": "0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.with_tpm": "0"
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             },
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "type": "block",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "vg_name": "ceph_vg0"
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:         }
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:     ],
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:     "1": [
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:         {
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "devices": [
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "/dev/loop4"
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             ],
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_name": "ceph_lv1",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_size": "21470642176",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "name": "ceph_lv1",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "tags": {
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.cluster_name": "ceph",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.crush_device_class": "",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.encrypted": "0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.objectstore": "bluestore",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.osd_id": "1",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.type": "block",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.vdo": "0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.with_tpm": "0"
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             },
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "type": "block",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "vg_name": "ceph_vg1"
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:         }
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:     ],
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:     "2": [
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:         {
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "devices": [
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "/dev/loop5"
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             ],
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_name": "ceph_lv2",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_size": "21470642176",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "name": "ceph_lv2",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "tags": {
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.cluster_name": "ceph",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.crush_device_class": "",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.encrypted": "0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.objectstore": "bluestore",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.osd_id": "2",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.type": "block",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.vdo": "0",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:                 "ceph.with_tpm": "0"
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             },
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "type": "block",
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:             "vg_name": "ceph_vg2"
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:         }
Dec 09 16:05:12 compute-0 dazzling_raman[96729]:     ]
Dec 09 16:05:12 compute-0 dazzling_raman[96729]: }
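The dazzling_raman output above is ceph-volume lvm list --format json, requested through cephadm in the sudo line at 16:05:11: a map from OSD id to the logical volume backing it, with ceph.* LVM tags recording cluster fsid, osd_fsid, objectstore and drive-group affinity. A minimal parser over the same document, assuming it has been captured to a file (lvm_list.json is a hypothetical name):

    import json

    # Parse the ceph-volume lvm list JSON shown above and print which
    # device and logical volume back each OSD.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f'osd.{osd_id}: {lv["lv_path"]} on {",".join(lv["devices"])} '
                  f'(osd_fsid={tags["ceph.osd_fsid"]}, '
                  f'objectstore={tags["ceph.objectstore"]})')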
Dec 09 16:05:12 compute-0 sudo[96763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aolsdnxvedniamqxycoldavajnewyboi ; /usr/bin/python3'
Dec 09 16:05:12 compute-0 sudo[96763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:12 compute-0 systemd[1]: libpod-d8d697efeaa7592b7fdf1acd57a71357a460d89e786d79feefd72d277145ffcc.scope: Deactivated successfully.
Dec 09 16:05:12 compute-0 podman[96713]: 2025-12-09 16:05:12.864941125 +0000 UTC m=+0.489013689 container died d8d697efeaa7592b7fdf1acd57a71357a460d89e786d79feefd72d277145ffcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_raman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 09 16:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b36e6bf1d3fdad5f8aef4dcf51fdee26c836f9b5102186b6ef035ad685827dac-merged.mount: Deactivated successfully.
Dec 09 16:05:12 compute-0 podman[96713]: 2025-12-09 16:05:12.914432633 +0000 UTC m=+0.538505197 container remove d8d697efeaa7592b7fdf1acd57a71357a460d89e786d79feefd72d277145ffcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_raman, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:12 compute-0 systemd[1]: libpod-conmon-d8d697efeaa7592b7fdf1acd57a71357a460d89e786d79feefd72d277145ffcc.scope: Deactivated successfully.
Dec 09 16:05:12 compute-0 sudo[96621]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:13 compute-0 python3[96765]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:05:13 compute-0 sudo[96777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:13 compute-0 sudo[96777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:13 compute-0 sudo[96777]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:13 compute-0 podman[96800]: 2025-12-09 16:05:13.074434663 +0000 UTC m=+0.050142246 container create 4b5a702ad6781022dd70bed8387c19197c5e3162ae4ce755e748c6b98ce2844e (image=quay.io/ceph/ceph:v20, name=admiring_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:05:13 compute-0 systemd[1]: Started libpod-conmon-4b5a702ad6781022dd70bed8387c19197c5e3162ae4ce755e748c6b98ce2844e.scope.
Dec 09 16:05:13 compute-0 sudo[96811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:05:13 compute-0 sudo[96811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:13 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:13 compute-0 podman[96800]: 2025-12-09 16:05:13.046996093 +0000 UTC m=+0.022703746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46de61c3819175f040bbc3fdd36d58f496b93f26f9ce82fa4a90e8b99118930/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46de61c3819175f040bbc3fdd36d58f496b93f26f9ce82fa4a90e8b99118930/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:13 compute-0 podman[96800]: 2025-12-09 16:05:13.155849721 +0000 UTC m=+0.131557344 container init 4b5a702ad6781022dd70bed8387c19197c5e3162ae4ce755e748c6b98ce2844e (image=quay.io/ceph/ceph:v20, name=admiring_kare, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:13 compute-0 podman[96800]: 2025-12-09 16:05:13.165769875 +0000 UTC m=+0.141477468 container start 4b5a702ad6781022dd70bed8387c19197c5e3162ae4ce755e748c6b98ce2844e (image=quay.io/ceph/ceph:v20, name=admiring_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:05:13 compute-0 podman[96800]: 2025-12-09 16:05:13.169663549 +0000 UTC m=+0.145371172 container attach 4b5a702ad6781022dd70bed8387c19197c5e3162ae4ce755e748c6b98ce2844e (image=quay.io/ceph/ceph:v20, name=admiring_kare, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:05:13 compute-0 podman[96879]: 2025-12-09 16:05:13.383973964 +0000 UTC m=+0.041527336 container create b34d3ccf6ba18b052bdc937a6a1f8e5332871038daf4bb1b17b8d83f829650fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:13 compute-0 systemd[1]: Started libpod-conmon-b34d3ccf6ba18b052bdc937a6a1f8e5332871038daf4bb1b17b8d83f829650fd.scope.
Dec 09 16:05:13 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:13 compute-0 podman[96879]: 2025-12-09 16:05:13.455020676 +0000 UTC m=+0.112574038 container init b34d3ccf6ba18b052bdc937a6a1f8e5332871038daf4bb1b17b8d83f829650fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:05:13 compute-0 podman[96879]: 2025-12-09 16:05:13.363688234 +0000 UTC m=+0.021241606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:13 compute-0 podman[96879]: 2025-12-09 16:05:13.460494702 +0000 UTC m=+0.118048034 container start b34d3ccf6ba18b052bdc937a6a1f8e5332871038daf4bb1b17b8d83f829650fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:05:13 compute-0 podman[96879]: 2025-12-09 16:05:13.463675976 +0000 UTC m=+0.121229318 container attach b34d3ccf6ba18b052bdc937a6a1f8e5332871038daf4bb1b17b8d83f829650fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:05:13 compute-0 bold_perlman[96894]: 167 167
Dec 09 16:05:13 compute-0 systemd[1]: libpod-b34d3ccf6ba18b052bdc937a6a1f8e5332871038daf4bb1b17b8d83f829650fd.scope: Deactivated successfully.
Dec 09 16:05:13 compute-0 podman[96879]: 2025-12-09 16:05:13.47544581 +0000 UTC m=+0.132999152 container died b34d3ccf6ba18b052bdc937a6a1f8e5332871038daf4bb1b17b8d83f829650fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c065052c5bc9bf7345b96153e85c9a4af1f55967161b499eb8e308b4525c9dc-merged.mount: Deactivated successfully.
Dec 09 16:05:13 compute-0 podman[96879]: 2025-12-09 16:05:13.51449813 +0000 UTC m=+0.172051482 container remove b34d3ccf6ba18b052bdc937a6a1f8e5332871038daf4bb1b17b8d83f829650fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 09 16:05:13 compute-0 systemd[1]: libpod-conmon-b34d3ccf6ba18b052bdc937a6a1f8e5332871038daf4bb1b17b8d83f829650fd.scope: Deactivated successfully.
Dec 09 16:05:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 09 16:05:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3700356960' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 09 16:05:13 compute-0 admiring_kare[96842]: 
Dec 09 16:05:13 compute-0 admiring_kare[96842]: {"fsid":"67f67f44-54fc-54ea-8df0-10931b6ecdaf","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":129,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":43,"num_osds":3,"num_up_osds":3,"osd_up_since":1765296267,"num_in_osds":3,"osd_in_since":1765296245,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":8},{"state_name":"unknown","count":2}],"num_pgs":10,"num_pools":10,"num_objects":29,"data_bytes":463390,"bytes_used":84045824,"bytes_avail":64327880704,"bytes_total":64411926528,"unknown_pgs_ratio":0.20000000298023224,"read_bytes_sec":1279,"write_bytes_sec":5374,"read_op_per_sec":0,"write_op_per_sec":13},"fsmap":{"epoch":4,"btime":"2025-12-09T16:05:10:301775+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.izecis","status":"up:active","gid":14257}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-09T16:04:27.868520+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
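The admiring_kare record above is the ceph -s -f json status requested by the ansible task at 16:05:13: HEALTH_OK, a single mon in quorum, 3/3 OSDs up and in, and 10 PGs of which 2 are still unknown, consistent with the freshly created rgw pools not having finished peering yet. A minimal health probe over the same JSON, again assuming the ceph CLI and admin keyring on the host:

    import json
    import subprocess

    # Host-side equivalent of the containerized "ceph -s -f json" call above.
    status = json.loads(subprocess.run(
        ["ceph", "-s", "-f", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)

    print("health:", status["health"]["status"])
    print("osds:  ", f'{status["osdmap"]["num_up_osds"]}/{status["osdmap"]["num_osds"]} up')
    for pg in status["pgmap"]["pgs_by_state"]:
        print(f'pgs:    {pg["count"]} {pg["state_name"]}')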
Dec 09 16:05:13 compute-0 systemd[1]: libpod-4b5a702ad6781022dd70bed8387c19197c5e3162ae4ce755e748c6b98ce2844e.scope: Deactivated successfully.
Dec 09 16:05:13 compute-0 podman[96800]: 2025-12-09 16:05:13.702808993 +0000 UTC m=+0.678516556 container died 4b5a702ad6781022dd70bed8387c19197c5e3162ae4ce755e748c6b98ce2844e (image=quay.io/ceph/ceph:v20, name=admiring_kare, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 09 16:05:13 compute-0 podman[96918]: 2025-12-09 16:05:13.718756678 +0000 UTC m=+0.057230095 container create 7c7b686148ee0112eb1fe7aa6c362b9c1f5f2815db8be916aa3b228ee7d02503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 16:05:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c46de61c3819175f040bbc3fdd36d58f496b93f26f9ce82fa4a90e8b99118930-merged.mount: Deactivated successfully.
Dec 09 16:05:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 09 16:05:13 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 09 16:05:13 compute-0 ceph-mon[75222]: osdmap e43: 3 total, 3 up, 3 in
Dec 09 16:05:13 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3700356960' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 09 16:05:13 compute-0 podman[96800]: 2025-12-09 16:05:13.756061301 +0000 UTC m=+0.731768854 container remove 4b5a702ad6781022dd70bed8387c19197c5e3162ae4ce755e748c6b98ce2844e (image=quay.io/ceph/ceph:v20, name=admiring_kare, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 09 16:05:13 compute-0 systemd[1]: Started libpod-conmon-7c7b686148ee0112eb1fe7aa6c362b9c1f5f2815db8be916aa3b228ee7d02503.scope.
Dec 09 16:05:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 09 16:05:13 compute-0 systemd[1]: libpod-conmon-4b5a702ad6781022dd70bed8387c19197c5e3162ae4ce755e748c6b98ce2844e.scope: Deactivated successfully.
Dec 09 16:05:13 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 09 16:05:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 09 16:05:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec 09 16:05:13 compute-0 sudo[96763]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:13 compute-0 podman[96918]: 2025-12-09 16:05:13.690264549 +0000 UTC m=+0.028737956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:13 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f27c4db55267e56556edcff1394aef4f3a4b6f7bc8a25ab7e9749382f28597/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f27c4db55267e56556edcff1394aef4f3a4b6f7bc8a25ab7e9749382f28597/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f27c4db55267e56556edcff1394aef4f3a4b6f7bc8a25ab7e9749382f28597/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f27c4db55267e56556edcff1394aef4f3a4b6f7bc8a25ab7e9749382f28597/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:13 compute-0 podman[96918]: 2025-12-09 16:05:13.813563082 +0000 UTC m=+0.152036489 container init 7c7b686148ee0112eb1fe7aa6c362b9c1f5f2815db8be916aa3b228ee7d02503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Dec 09 16:05:13 compute-0 podman[96918]: 2025-12-09 16:05:13.821933965 +0000 UTC m=+0.160407352 container start 7c7b686148ee0112eb1fe7aa6c362b9c1f5f2815db8be916aa3b228ee7d02503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_faraday, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:05:13 compute-0 podman[96918]: 2025-12-09 16:05:13.824497943 +0000 UTC m=+0.162971330 container attach 7c7b686148ee0112eb1fe7aa6c362b9c1f5f2815db8be916aa3b228ee7d02503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_faraday, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:13 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 2 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Dec 09 16:05:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:05:14 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mds-cephfs-compute-0-izecis[95392]: 2025-12-09T16:05:14.316+0000 7fd8aa2fc640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 09 16:05:14 compute-0 ceph-mds[95396]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 09 16:05:14 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 44 pg[11.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:14 compute-0 lvm[97026]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:05:14 compute-0 lvm[97029]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:05:14 compute-0 lvm[97026]: VG ceph_vg0 finished
Dec 09 16:05:14 compute-0 lvm[97029]: VG ceph_vg1 finished
Dec 09 16:05:14 compute-0 lvm[97031]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:05:14 compute-0 lvm[97031]: VG ceph_vg2 finished
Dec 09 16:05:14 compute-0 sudo[97056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szqhjhldqismvftclwkijrraqfltktdn ; /usr/bin/python3'
Dec 09 16:05:14 compute-0 sudo[97056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:14 compute-0 objective_faraday[96950]: {}
Dec 09 16:05:14 compute-0 systemd[1]: libpod-7c7b686148ee0112eb1fe7aa6c362b9c1f5f2815db8be916aa3b228ee7d02503.scope: Deactivated successfully.
Dec 09 16:05:14 compute-0 systemd[1]: libpod-7c7b686148ee0112eb1fe7aa6c362b9c1f5f2815db8be916aa3b228ee7d02503.scope: Consumed 1.354s CPU time.
Dec 09 16:05:14 compute-0 podman[96918]: 2025-12-09 16:05:14.661003465 +0000 UTC m=+0.999476872 container died 7c7b686148ee0112eb1fe7aa6c362b9c1f5f2815db8be916aa3b228ee7d02503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:05:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9f27c4db55267e56556edcff1394aef4f3a4b6f7bc8a25ab7e9749382f28597-merged.mount: Deactivated successfully.
Dec 09 16:05:14 compute-0 podman[96918]: 2025-12-09 16:05:14.708028427 +0000 UTC m=+1.046501814 container remove 7c7b686148ee0112eb1fe7aa6c362b9c1f5f2815db8be916aa3b228ee7d02503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 09 16:05:14 compute-0 systemd[1]: libpod-conmon-7c7b686148ee0112eb1fe7aa6c362b9c1f5f2815db8be916aa3b228ee7d02503.scope: Deactivated successfully.
Dec 09 16:05:14 compute-0 sudo[96811]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:05:14 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:05:14 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 09 16:05:14 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 09 16:05:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 09 16:05:14 compute-0 python3[97058]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:05:14 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 09 16:05:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 09 16:05:14 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec 09 16:05:14 compute-0 ceph-mon[75222]: osdmap e44: 3 total, 3 up, 3 in
Dec 09 16:05:14 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec 09 16:05:14 compute-0 ceph-mon[75222]: pgmap v89: 11 pgs: 2 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Dec 09 16:05:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:14 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 45 pg[11.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:14 compute-0 sudo[97073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:05:14 compute-0 sudo[97073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:14 compute-0 podman[97076]: 2025-12-09 16:05:14.822923236 +0000 UTC m=+0.039613326 container create e1a2b18e9f2a41dadbcc2a87573bfd628968646d947fb544ba04784b2d6b15b5 (image=quay.io/ceph/ceph:v20, name=youthful_nash, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:05:14 compute-0 sudo[97073]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:05:14 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:05:14 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:14 compute-0 systemd[1]: Started libpod-conmon-e1a2b18e9f2a41dadbcc2a87573bfd628968646d947fb544ba04784b2d6b15b5.scope.
Dec 09 16:05:14 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c4b28dc761ef087e57271f808626d6bf85ca2b4f4938c51179c7cba28b803d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c4b28dc761ef087e57271f808626d6bf85ca2b4f4938c51179c7cba28b803d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:14 compute-0 podman[97076]: 2025-12-09 16:05:14.806370065 +0000 UTC m=+0.023060155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:05:14 compute-0 sudo[97114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:14 compute-0 sudo[97114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:14 compute-0 podman[97076]: 2025-12-09 16:05:14.914753941 +0000 UTC m=+0.131444051 container init e1a2b18e9f2a41dadbcc2a87573bfd628968646d947fb544ba04784b2d6b15b5 (image=quay.io/ceph/ceph:v20, name=youthful_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:05:14 compute-0 sudo[97114]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:14 compute-0 podman[97076]: 2025-12-09 16:05:14.92337784 +0000 UTC m=+0.140067930 container start e1a2b18e9f2a41dadbcc2a87573bfd628968646d947fb544ba04784b2d6b15b5 (image=quay.io/ceph/ceph:v20, name=youthful_nash, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 09 16:05:14 compute-0 podman[97076]: 2025-12-09 16:05:14.926906374 +0000 UTC m=+0.143596464 container attach e1a2b18e9f2a41dadbcc2a87573bfd628968646d947fb544ba04784b2d6b15b5 (image=quay.io/ceph/ceph:v20, name=youthful_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 09 16:05:14 compute-0 sudo[97143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 09 16:05:14 compute-0 sudo[97143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 09 16:05:15 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/556659416' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 09 16:05:15 compute-0 youthful_nash[97118]: 
Dec 09 16:05:15 compute-0 systemd[1]: libpod-e1a2b18e9f2a41dadbcc2a87573bfd628968646d947fb544ba04784b2d6b15b5.scope: Deactivated successfully.
Dec 09 16:05:15 compute-0 youthful_nash[97118]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":
"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.efuxpz","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec 09 16:05:15 compute-0 podman[97076]: 2025-12-09 16:05:15.358118545 +0000 UTC m=+0.574808655 container died e1a2b18e9f2a41dadbcc2a87573bfd628968646d947fb544ba04784b2d6b15b5 (image=quay.io/ceph/ceph:v20, name=youthful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 16:05:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0c4b28dc761ef087e57271f808626d6bf85ca2b4f4938c51179c7cba28b803d-merged.mount: Deactivated successfully.
Dec 09 16:05:15 compute-0 podman[97076]: 2025-12-09 16:05:15.399016384 +0000 UTC m=+0.615706474 container remove e1a2b18e9f2a41dadbcc2a87573bfd628968646d947fb544ba04784b2d6b15b5 (image=quay.io/ceph/ceph:v20, name=youthful_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:15 compute-0 podman[97232]: 2025-12-09 16:05:15.402201869 +0000 UTC m=+0.070704974 container exec 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:05:15 compute-0 systemd[1]: libpod-conmon-e1a2b18e9f2a41dadbcc2a87573bfd628968646d947fb544ba04784b2d6b15b5.scope: Deactivated successfully.
Dec 09 16:05:15 compute-0 sudo[97056]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:15 compute-0 podman[97232]: 2025-12-09 16:05:15.497227679 +0000 UTC m=+0.165730784 container exec_died 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 09 16:05:15 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 09 16:05:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 09 16:05:15 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 09 16:05:15 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 09 16:05:15 compute-0 ceph-mon[75222]: osdmap e45: 3 total, 3 up, 3 in
Dec 09 16:05:15 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec 09 16:05:15 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:15 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:15 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/556659416' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 09 16:05:15 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/620534407' entity='client.rgw.rgw.compute-0.efuxpz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 09 16:05:15 compute-0 ceph-mon[75222]: osdmap e46: 3 total, 3 up, 3 in
Dec 09 16:05:15 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v92: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Dec 09 16:05:15 compute-0 radosgw[94933]: v1 topic migration: starting v1 topic migration..
Dec 09 16:05:15 compute-0 radosgw[94933]: v1 topic migration: finished v1 topic migration
Dec 09 16:05:16 compute-0 radosgw[94933]: framework: beast
Dec 09 16:05:16 compute-0 radosgw[94933]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 09 16:05:16 compute-0 radosgw[94933]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 09 16:05:16 compute-0 radosgw[94933]: starting handler: beast
Dec 09 16:05:16 compute-0 radosgw[94933]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 16:05:16 compute-0 radosgw[94933]: mgrc service_daemon_register rgw.14260 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.efuxpz,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=483fcdb3-bcab-4288-b81d-feaf7f34b01d,zone_name=default,zonegroup_id=867d5d1c-b402-423a-949d-5103c0b25b35,zonegroup_name=default}
Dec 09 16:05:16 compute-0 sudo[97143]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:16 compute-0 sudo[97491]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxepzsxwietukubcetdxdekcgksabeyz ; /usr/bin/python3'
Dec 09 16:05:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:05:16 compute-0 sudo[97491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:16 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:05:16 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:05:16 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:05:16 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:05:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:05:16 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:05:16 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:05:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:05:16 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:05:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:05:16 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:16 compute-0 sudo[97494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:16 compute-0 sudo[97494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:16 compute-0 sudo[97494]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:16 compute-0 python3[97493]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:05:16 compute-0 sudo[97519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:05:16 compute-0 sudo[97519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:16 compute-0 podman[97540]: 2025-12-09 16:05:16.47684434 +0000 UTC m=+0.035964569 container create c7760e2eb488e596eeac408cde78859db22912ae4f61ed0c7a894b39cc38a7c4 (image=quay.io/ceph/ceph:v20, name=elastic_williamson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 16:05:16 compute-0 systemd[1]: Started libpod-conmon-c7760e2eb488e596eeac408cde78859db22912ae4f61ed0c7a894b39cc38a7c4.scope.
Dec 09 16:05:16 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4c66472e79d764271cf759c1e0d21fe8870ee771619a2dec5d5086870a4cb6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4c66472e79d764271cf759c1e0d21fe8870ee771619a2dec5d5086870a4cb6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:16 compute-0 podman[97540]: 2025-12-09 16:05:16.543557676 +0000 UTC m=+0.102677985 container init c7760e2eb488e596eeac408cde78859db22912ae4f61ed0c7a894b39cc38a7c4 (image=quay.io/ceph/ceph:v20, name=elastic_williamson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:05:16 compute-0 podman[97540]: 2025-12-09 16:05:16.549509064 +0000 UTC m=+0.108629293 container start c7760e2eb488e596eeac408cde78859db22912ae4f61ed0c7a894b39cc38a7c4 (image=quay.io/ceph/ceph:v20, name=elastic_williamson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:05:16 compute-0 podman[97540]: 2025-12-09 16:05:16.460809813 +0000 UTC m=+0.019930082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:05:16 compute-0 podman[97540]: 2025-12-09 16:05:16.561784511 +0000 UTC m=+0.120904830 container attach c7760e2eb488e596eeac408cde78859db22912ae4f61ed0c7a894b39cc38a7c4 (image=quay.io/ceph/ceph:v20, name=elastic_williamson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:05:16 compute-0 podman[97595]: 2025-12-09 16:05:16.762388212 +0000 UTC m=+0.059250278 container create ef7d77a6cab96e21c6a061931e339e10633318c73c885d4ef7f0637ca8fb7c31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:05:16 compute-0 systemd[1]: Started libpod-conmon-ef7d77a6cab96e21c6a061931e339e10633318c73c885d4ef7f0637ca8fb7c31.scope.
Dec 09 16:05:16 compute-0 podman[97595]: 2025-12-09 16:05:16.73150983 +0000 UTC m=+0.028371986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:16 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:16 compute-0 podman[97595]: 2025-12-09 16:05:16.853273352 +0000 UTC m=+0.150135448 container init ef7d77a6cab96e21c6a061931e339e10633318c73c885d4ef7f0637ca8fb7c31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:05:16 compute-0 podman[97595]: 2025-12-09 16:05:16.861692996 +0000 UTC m=+0.158555082 container start ef7d77a6cab96e21c6a061931e339e10633318c73c885d4ef7f0637ca8fb7c31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_swartz, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:05:16 compute-0 hopeful_swartz[97611]: 167 167
Dec 09 16:05:16 compute-0 podman[97595]: 2025-12-09 16:05:16.866520315 +0000 UTC m=+0.163382461 container attach ef7d77a6cab96e21c6a061931e339e10633318c73c885d4ef7f0637ca8fb7c31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 16:05:16 compute-0 systemd[1]: libpod-ef7d77a6cab96e21c6a061931e339e10633318c73c885d4ef7f0637ca8fb7c31.scope: Deactivated successfully.
Dec 09 16:05:16 compute-0 podman[97595]: 2025-12-09 16:05:16.877316442 +0000 UTC m=+0.174178538 container died ef7d77a6cab96e21c6a061931e339e10633318c73c885d4ef7f0637ca8fb7c31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 16:05:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf8d023f0da1cd5db69720e106ca674064425a20523e9e34e4d806e7536f0649-merged.mount: Deactivated successfully.
Dec 09 16:05:16 compute-0 podman[97595]: 2025-12-09 16:05:16.928522345 +0000 UTC m=+0.225384431 container remove ef7d77a6cab96e21c6a061931e339e10633318c73c885d4ef7f0637ca8fb7c31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_swartz, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 16:05:16 compute-0 systemd[1]: libpod-conmon-ef7d77a6cab96e21c6a061931e339e10633318c73c885d4ef7f0637ca8fb7c31.scope: Deactivated successfully.
Dec 09 16:05:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec 09 16:05:16 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3202198268' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec 09 16:05:16 compute-0 elastic_williamson[97560]: mimic
Dec 09 16:05:17 compute-0 systemd[1]: libpod-c7760e2eb488e596eeac408cde78859db22912ae4f61ed0c7a894b39cc38a7c4.scope: Deactivated successfully.
Dec 09 16:05:17 compute-0 podman[97540]: 2025-12-09 16:05:17.004897009 +0000 UTC m=+0.564017278 container died c7760e2eb488e596eeac408cde78859db22912ae4f61ed0c7a894b39cc38a7c4 (image=quay.io/ceph/ceph:v20, name=elastic_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 09 16:05:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac4c66472e79d764271cf759c1e0d21fe8870ee771619a2dec5d5086870a4cb6-merged.mount: Deactivated successfully.
Dec 09 16:05:17 compute-0 podman[97540]: 2025-12-09 16:05:17.059672747 +0000 UTC m=+0.618792986 container remove c7760e2eb488e596eeac408cde78859db22912ae4f61ed0c7a894b39cc38a7c4 (image=quay.io/ceph/ceph:v20, name=elastic_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:17 compute-0 systemd[1]: libpod-conmon-c7760e2eb488e596eeac408cde78859db22912ae4f61ed0c7a894b39cc38a7c4.scope: Deactivated successfully.
Dec 09 16:05:17 compute-0 sudo[97491]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:17 compute-0 podman[97649]: 2025-12-09 16:05:17.124544864 +0000 UTC m=+0.043174260 container create f9cbd062386c27d0bf351070a6dadf4c4ae0e6c31fde80d593acb7038721e870 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_gagarin, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:05:17 compute-0 systemd[1]: Started libpod-conmon-f9cbd062386c27d0bf351070a6dadf4c4ae0e6c31fde80d593acb7038721e870.scope.
Dec 09 16:05:17 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1b98029010471df0aa8d1159bcb9d94046b27bd1fa166300c375f82c79565c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1b98029010471df0aa8d1159bcb9d94046b27bd1fa166300c375f82c79565c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1b98029010471df0aa8d1159bcb9d94046b27bd1fa166300c375f82c79565c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1b98029010471df0aa8d1159bcb9d94046b27bd1fa166300c375f82c79565c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1b98029010471df0aa8d1159bcb9d94046b27bd1fa166300c375f82c79565c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:17 compute-0 podman[97649]: 2025-12-09 16:05:17.191323412 +0000 UTC m=+0.109952848 container init f9cbd062386c27d0bf351070a6dadf4c4ae0e6c31fde80d593acb7038721e870 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_gagarin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 16:05:17 compute-0 podman[97649]: 2025-12-09 16:05:17.196944482 +0000 UTC m=+0.115573868 container start f9cbd062386c27d0bf351070a6dadf4c4ae0e6c31fde80d593acb7038721e870 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 09 16:05:17 compute-0 podman[97649]: 2025-12-09 16:05:17.199936032 +0000 UTC m=+0.118565418 container attach f9cbd062386c27d0bf351070a6dadf4c4ae0e6c31fde80d593acb7038721e870 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_gagarin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:05:17 compute-0 podman[97649]: 2025-12-09 16:05:17.10523463 +0000 UTC m=+0.023864066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:17 compute-0 ceph-mon[75222]: pgmap v92: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Dec 09 16:05:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:05:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:05:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:05:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:05:17 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3202198268' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec 09 16:05:17 compute-0 boring_gagarin[97665]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:05:17 compute-0 boring_gagarin[97665]: --> All data devices are unavailable
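[annotation] The two boring_gagarin lines above are ceph-volume reporting on the drive group: it was passed 0 physical and 3 LVM data devices, and all three are "unavailable", which is expected here since the LVs already back OSDs 0-2 (see the lvm list output further down). One way to see why the orchestrator rejects a device is to read its inventory. A minimal sketch, assuming a working `ceph` CLI and admin keyring on the host; the JSON field names (name, devices, path, available, rejected_reasons) follow cephadm's inventory format and should be treated as assumptions that may vary across releases:

    # Print each device cephadm sees on this host with its availability
    # and rejection reasons, from `ceph orch device ls --format json`.
    # Assumes a working ceph CLI/keyring; field names are assumptions
    # based on cephadm's inventory JSON.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "orch", "device", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for host in json.loads(raw):
        for dev in host.get("devices", []):
            state = ("available" if dev.get("available")
                     else "rejected: " + ", ".join(dev.get("rejected_reasons", [])))
            print(host.get("name"), dev.get("path"), state)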
Dec 09 16:05:17 compute-0 systemd[1]: libpod-f9cbd062386c27d0bf351070a6dadf4c4ae0e6c31fde80d593acb7038721e870.scope: Deactivated successfully.
Dec 09 16:05:17 compute-0 podman[97649]: 2025-12-09 16:05:17.733334283 +0000 UTC m=+0.651963679 container died f9cbd062386c27d0bf351070a6dadf4c4ae0e6c31fde80d593acb7038721e870 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_gagarin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:05:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb1b98029010471df0aa8d1159bcb9d94046b27bd1fa166300c375f82c79565c-merged.mount: Deactivated successfully.
Dec 09 16:05:17 compute-0 podman[97649]: 2025-12-09 16:05:17.798744375 +0000 UTC m=+0.717373801 container remove f9cbd062386c27d0bf351070a6dadf4c4ae0e6c31fde80d593acb7038721e870 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 09 16:05:17 compute-0 systemd[1]: libpod-conmon-f9cbd062386c27d0bf351070a6dadf4c4ae0e6c31fde80d593acb7038721e870.scope: Deactivated successfully.
Dec 09 16:05:17 compute-0 sudo[97519]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:17 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v93: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 199 B/s rd, 398 B/s wr, 1 op/s
Dec 09 16:05:17 compute-0 sudo[97746]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibllzkgmsduvkdkaogbazsigeswsivzs ; /usr/bin/python3'
Dec 09 16:05:17 compute-0 sudo[97700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:17 compute-0 sudo[97746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:05:17 compute-0 sudo[97700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:17 compute-0 sudo[97700]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:18 compute-0 sudo[97751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:05:18 compute-0 sudo[97751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:18 compute-0 python3[97749]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:05:18 compute-0 podman[97776]: 2025-12-09 16:05:18.203430739 +0000 UTC m=+0.061365595 container create e71dab5e5a39e9974e7796129d4e6d31efe16cd59a91f80d26c54b060f5747f4 (image=quay.io/ceph/ceph:v20, name=zen_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 09 16:05:18 compute-0 systemd[1]: Started libpod-conmon-e71dab5e5a39e9974e7796129d4e6d31efe16cd59a91f80d26c54b060f5747f4.scope.
Dec 09 16:05:18 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33530d83b93dd27f170b4eeba564628717f40a45199f19d3c8f9a297e3f42e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33530d83b93dd27f170b4eeba564628717f40a45199f19d3c8f9a297e3f42e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:18 compute-0 podman[97776]: 2025-12-09 16:05:18.184988358 +0000 UTC m=+0.042923244 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:05:18 compute-0 podman[97799]: 2025-12-09 16:05:18.279051163 +0000 UTC m=+0.044180958 container create c824bc4baddbc1861ff0c6104944f134642fe6844d2d8f3882a7e973c54c6fea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 09 16:05:18 compute-0 podman[97776]: 2025-12-09 16:05:18.288027602 +0000 UTC m=+0.145962508 container init e71dab5e5a39e9974e7796129d4e6d31efe16cd59a91f80d26c54b060f5747f4 (image=quay.io/ceph/ceph:v20, name=zen_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 09 16:05:18 compute-0 podman[97776]: 2025-12-09 16:05:18.294920305 +0000 UTC m=+0.152855181 container start e71dab5e5a39e9974e7796129d4e6d31efe16cd59a91f80d26c54b060f5747f4 (image=quay.io/ceph/ceph:v20, name=zen_carver, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 09 16:05:18 compute-0 podman[97776]: 2025-12-09 16:05:18.303810112 +0000 UTC m=+0.161744978 container attach e71dab5e5a39e9974e7796129d4e6d31efe16cd59a91f80d26c54b060f5747f4 (image=quay.io/ceph/ceph:v20, name=zen_carver, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 09 16:05:18 compute-0 systemd[1]: Started libpod-conmon-c824bc4baddbc1861ff0c6104944f134642fe6844d2d8f3882a7e973c54c6fea.scope.
Dec 09 16:05:18 compute-0 podman[97799]: 2025-12-09 16:05:18.262778919 +0000 UTC m=+0.027908724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:18 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:18 compute-0 podman[97799]: 2025-12-09 16:05:18.373276691 +0000 UTC m=+0.138406486 container init c824bc4baddbc1861ff0c6104944f134642fe6844d2d8f3882a7e973c54c6fea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:18 compute-0 podman[97799]: 2025-12-09 16:05:18.381445489 +0000 UTC m=+0.146575284 container start c824bc4baddbc1861ff0c6104944f134642fe6844d2d8f3882a7e973c54c6fea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bohr, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:05:18 compute-0 epic_bohr[97820]: 167 167
Dec 09 16:05:18 compute-0 podman[97799]: 2025-12-09 16:05:18.384968793 +0000 UTC m=+0.150098578 container attach c824bc4baddbc1861ff0c6104944f134642fe6844d2d8f3882a7e973c54c6fea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:18 compute-0 podman[97799]: 2025-12-09 16:05:18.385657101 +0000 UTC m=+0.150786886 container died c824bc4baddbc1861ff0c6104944f134642fe6844d2d8f3882a7e973c54c6fea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 16:05:18 compute-0 systemd[1]: libpod-c824bc4baddbc1861ff0c6104944f134642fe6844d2d8f3882a7e973c54c6fea.scope: Deactivated successfully.
Dec 09 16:05:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e2973499245c59ba9a51fd70a6891a962dd00e373a10f7c863cf9f54043143-merged.mount: Deactivated successfully.
Dec 09 16:05:18 compute-0 podman[97799]: 2025-12-09 16:05:18.431709857 +0000 UTC m=+0.196839642 container remove c824bc4baddbc1861ff0c6104944f134642fe6844d2d8f3882a7e973c54c6fea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bohr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:05:18 compute-0 systemd[1]: libpod-conmon-c824bc4baddbc1861ff0c6104944f134642fe6844d2d8f3882a7e973c54c6fea.scope: Deactivated successfully.
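[annotation] The "167 167" that epic_bohr printed above (and optimistic_kapitsa prints again later) is a uid/gid probe: cephadm runs a short-lived container to learn which uid and gid the ceph account maps to inside the image, so host paths under /var/lib/ceph can be owned to match; 167:167 is the ceph user on CentOS/RHEL-based images. A minimal sketch of the same lookup done directly, assuming a `ceph` account exists wherever it runs:

    # Resolve the ceph user's uid/gid, mirroring the "167 167" probe
    # above. Assumes a `ceph` account exists; inside the CentOS Stream
    # based ceph image it resolves to 167:167 (a host account, if any,
    # may differ).
    import pwd

    ceph = pwd.getpwnam("ceph")
    print(ceph.pw_uid, ceph.pw_gid)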
Dec 09 16:05:18 compute-0 podman[97861]: 2025-12-09 16:05:18.586662463 +0000 UTC m=+0.047443015 container create 3817e2b1c2fbb27583a9ecd5323b4f9d827acbe61ef6980be5cdf87adc835a30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_torvalds, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 09 16:05:18 compute-0 systemd[1]: Started libpod-conmon-3817e2b1c2fbb27583a9ecd5323b4f9d827acbe61ef6980be5cdf87adc835a30.scope.
Dec 09 16:05:18 compute-0 podman[97861]: 2025-12-09 16:05:18.560293871 +0000 UTC m=+0.021074403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:18 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b05fbcf95ec67b1ee47021122574b4b7f15b169f3a67311cfaf45206533650/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b05fbcf95ec67b1ee47021122574b4b7f15b169f3a67311cfaf45206533650/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b05fbcf95ec67b1ee47021122574b4b7f15b169f3a67311cfaf45206533650/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b05fbcf95ec67b1ee47021122574b4b7f15b169f3a67311cfaf45206533650/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:18 compute-0 podman[97861]: 2025-12-09 16:05:18.709677308 +0000 UTC m=+0.170457910 container init 3817e2b1c2fbb27583a9ecd5323b4f9d827acbe61ef6980be5cdf87adc835a30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_torvalds, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 09 16:05:18 compute-0 podman[97861]: 2025-12-09 16:05:18.723069794 +0000 UTC m=+0.183850316 container start 3817e2b1c2fbb27583a9ecd5323b4f9d827acbe61ef6980be5cdf87adc835a30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Dec 09 16:05:18 compute-0 podman[97861]: 2025-12-09 16:05:18.727366169 +0000 UTC m=+0.188146781 container attach 3817e2b1c2fbb27583a9ecd5323b4f9d827acbe61ef6980be5cdf87adc835a30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_torvalds, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 09 16:05:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec 09 16:05:18 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355894993' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec 09 16:05:18 compute-0 zen_carver[97808]: 
Dec 09 16:05:18 compute-0 zen_carver[97808]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Dec 09 16:05:18 compute-0 systemd[1]: libpod-e71dab5e5a39e9974e7796129d4e6d31efe16cd59a91f80d26c54b060f5747f4.scope: Deactivated successfully.
Dec 09 16:05:18 compute-0 podman[97776]: 2025-12-09 16:05:18.834918282 +0000 UTC m=+0.692853148 container died e71dab5e5a39e9974e7796129d4e6d31efe16cd59a91f80d26c54b060f5747f4 (image=quay.io/ceph/ceph:v20, name=zen_carver, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:05:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b33530d83b93dd27f170b4eeba564628717f40a45199f19d3c8f9a297e3f42e8-merged.mount: Deactivated successfully.
Dec 09 16:05:18 compute-0 podman[97776]: 2025-12-09 16:05:18.882599972 +0000 UTC m=+0.740534838 container remove e71dab5e5a39e9974e7796129d4e6d31efe16cd59a91f80d26c54b060f5747f4 (image=quay.io/ceph/ceph:v20, name=zen_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 09 16:05:18 compute-0 systemd[1]: libpod-conmon-e71dab5e5a39e9974e7796129d4e6d31efe16cd59a91f80d26c54b060f5747f4.scope: Deactivated successfully.
Dec 09 16:05:18 compute-0 sudo[97746]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]: {
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:     "0": [
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:         {
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "devices": [
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "/dev/loop3"
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             ],
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_name": "ceph_lv0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_size": "21470642176",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "name": "ceph_lv0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "tags": {
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.cluster_name": "ceph",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.crush_device_class": "",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.encrypted": "0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.objectstore": "bluestore",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.osd_id": "0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.type": "block",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.vdo": "0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.with_tpm": "0"
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             },
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "type": "block",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "vg_name": "ceph_vg0"
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:         }
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:     ],
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:     "1": [
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:         {
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "devices": [
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "/dev/loop4"
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             ],
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_name": "ceph_lv1",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_size": "21470642176",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "name": "ceph_lv1",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "tags": {
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.cluster_name": "ceph",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.crush_device_class": "",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.encrypted": "0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.objectstore": "bluestore",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.osd_id": "1",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.type": "block",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.vdo": "0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.with_tpm": "0"
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             },
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "type": "block",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "vg_name": "ceph_vg1"
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:         }
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:     ],
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:     "2": [
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:         {
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "devices": [
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "/dev/loop5"
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             ],
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_name": "ceph_lv2",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_size": "21470642176",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "name": "ceph_lv2",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "tags": {
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.cluster_name": "ceph",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.crush_device_class": "",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.encrypted": "0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.objectstore": "bluestore",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.osd_id": "2",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.type": "block",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.vdo": "0",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:                 "ceph.with_tpm": "0"
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             },
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "type": "block",
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:             "vg_name": "ceph_vg2"
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:         }
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]:     ]
Dec 09 16:05:18 compute-0 quizzical_torvalds[97878]: }
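[annotation] The quizzical_torvalds output above is the `ceph-volume lvm list --format json` result from the cephadm call sudo'd at 16:05:18: top-level keys are OSD ids, each holding a list of LV records whose `tags` carry the OSD metadata. A minimal sketch that flattens it into one row per OSD, assuming the JSON has been captured to a string (the variable name is illustrative):

    # Flatten `ceph-volume lvm list --format json` (shape as logged
    # above: {"0": [lv_record, ...], "1": [...], ...}) into rows of
    # (osd_id, lv_path, backing devices, osd_fsid).
    import json

    def osd_table(lvm_list_json: str):
        data = json.loads(lvm_list_json)
        for osd_id, lvs in sorted(data.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv.get("tags", {})
                yield (osd_id, lv.get("lv_path"),
                       lv.get("devices", []), tags.get("ceph.osd_fsid"))

    # For this host the rows read:
    #   0  /dev/ceph_vg0/ceph_lv0  ['/dev/loop3']  5f4f01e5-...
    #   1  /dev/ceph_vg1/ceph_lv1  ['/dev/loop4']  40156d55-...
    #   2  /dev/ceph_vg2/ceph_lv2  ['/dev/loop5']  243996ad-...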
Dec 09 16:05:19 compute-0 systemd[1]: libpod-3817e2b1c2fbb27583a9ecd5323b4f9d827acbe61ef6980be5cdf87adc835a30.scope: Deactivated successfully.
Dec 09 16:05:19 compute-0 podman[97861]: 2025-12-09 16:05:19.020985466 +0000 UTC m=+0.481765978 container died 3817e2b1c2fbb27583a9ecd5323b4f9d827acbe61ef6980be5cdf87adc835a30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_torvalds, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8b05fbcf95ec67b1ee47021122574b4b7f15b169f3a67311cfaf45206533650-merged.mount: Deactivated successfully.
Dec 09 16:05:19 compute-0 podman[97861]: 2025-12-09 16:05:19.062003078 +0000 UTC m=+0.522783590 container remove 3817e2b1c2fbb27583a9ecd5323b4f9d827acbe61ef6980be5cdf87adc835a30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:05:19 compute-0 systemd[1]: libpod-conmon-3817e2b1c2fbb27583a9ecd5323b4f9d827acbe61ef6980be5cdf87adc835a30.scope: Deactivated successfully.
Dec 09 16:05:19 compute-0 sudo[97751]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:19 compute-0 sudo[97913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:05:19 compute-0 sudo[97913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:19 compute-0 sudo[97913]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:19 compute-0 sudo[97938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:05:19 compute-0 sudo[97938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:05:19 compute-0 ceph-mon[75222]: pgmap v93: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 199 B/s rd, 398 B/s wr, 1 op/s
Dec 09 16:05:19 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2355894993' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec 09 16:05:19 compute-0 podman[97975]: 2025-12-09 16:05:19.568443572 +0000 UTC m=+0.048619865 container create dccb69718328eeed620650ecd710defa38e6b99cfff5ff74465aa780b7c03a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kapitsa, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 09 16:05:19 compute-0 systemd[1]: Started libpod-conmon-dccb69718328eeed620650ecd710defa38e6b99cfff5ff74465aa780b7c03a98.scope.
Dec 09 16:05:19 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:19 compute-0 podman[97975]: 2025-12-09 16:05:19.641657281 +0000 UTC m=+0.121833614 container init dccb69718328eeed620650ecd710defa38e6b99cfff5ff74465aa780b7c03a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kapitsa, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 09 16:05:19 compute-0 podman[97975]: 2025-12-09 16:05:19.549106487 +0000 UTC m=+0.029282800 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:19 compute-0 podman[97975]: 2025-12-09 16:05:19.646905001 +0000 UTC m=+0.127081294 container start dccb69718328eeed620650ecd710defa38e6b99cfff5ff74465aa780b7c03a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:05:19 compute-0 podman[97975]: 2025-12-09 16:05:19.650107876 +0000 UTC m=+0.130284179 container attach dccb69718328eeed620650ecd710defa38e6b99cfff5ff74465aa780b7c03a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 09 16:05:19 compute-0 optimistic_kapitsa[97991]: 167 167
Dec 09 16:05:19 compute-0 systemd[1]: libpod-dccb69718328eeed620650ecd710defa38e6b99cfff5ff74465aa780b7c03a98.scope: Deactivated successfully.
Dec 09 16:05:19 compute-0 podman[97975]: 2025-12-09 16:05:19.652219553 +0000 UTC m=+0.132395886 container died dccb69718328eeed620650ecd710defa38e6b99cfff5ff74465aa780b7c03a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kapitsa, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8541b598a96bc9e73e2d1c3ee26635513e3b675a54a37454a62122c4e358635-merged.mount: Deactivated successfully.
Dec 09 16:05:19 compute-0 podman[97975]: 2025-12-09 16:05:19.69792715 +0000 UTC m=+0.178103463 container remove dccb69718328eeed620650ecd710defa38e6b99cfff5ff74465aa780b7c03a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kapitsa, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:05:19 compute-0 systemd[1]: libpod-conmon-dccb69718328eeed620650ecd710defa38e6b99cfff5ff74465aa780b7c03a98.scope: Deactivated successfully.
Dec 09 16:05:19 compute-0 podman[98014]: 2025-12-09 16:05:19.871997203 +0000 UTC m=+0.042056881 container create 9ca61f865a8c64a45ae75783f9dc8a07ae421855b4b5be343ae5d8f0ab358e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_buck, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:05:19 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v94: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 11 KiB/s wr, 233 op/s
Dec 09 16:05:19 compute-0 systemd[1]: Started libpod-conmon-9ca61f865a8c64a45ae75783f9dc8a07ae421855b4b5be343ae5d8f0ab358e9b.scope.
Dec 09 16:05:19 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68d3b0946f8cba2e948e0e3ebd47454d5b5f77b8e0528de9577190b32aec480/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:19 compute-0 podman[98014]: 2025-12-09 16:05:19.856439089 +0000 UTC m=+0.026498797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68d3b0946f8cba2e948e0e3ebd47454d5b5f77b8e0528de9577190b32aec480/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68d3b0946f8cba2e948e0e3ebd47454d5b5f77b8e0528de9577190b32aec480/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68d3b0946f8cba2e948e0e3ebd47454d5b5f77b8e0528de9577190b32aec480/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:05:19 compute-0 podman[98014]: 2025-12-09 16:05:19.967675891 +0000 UTC m=+0.137735599 container init 9ca61f865a8c64a45ae75783f9dc8a07ae421855b4b5be343ae5d8f0ab358e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:05:19 compute-0 podman[98014]: 2025-12-09 16:05:19.976547417 +0000 UTC m=+0.146607095 container start 9ca61f865a8c64a45ae75783f9dc8a07ae421855b4b5be343ae5d8f0ab358e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_buck, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 09 16:05:19 compute-0 podman[98014]: 2025-12-09 16:05:19.980523543 +0000 UTC m=+0.150583221 container attach 9ca61f865a8c64a45ae75783f9dc8a07ae421855b4b5be343ae5d8f0ab358e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:05:20 compute-0 lvm[98112]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:05:20 compute-0 lvm[98108]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:05:20 compute-0 lvm[98112]: VG ceph_vg2 finished
Dec 09 16:05:20 compute-0 lvm[98108]: VG ceph_vg0 finished
Dec 09 16:05:20 compute-0 lvm[98111]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:05:20 compute-0 lvm[98111]: VG ceph_vg1 finished
Dec 09 16:05:20 compute-0 fervent_buck[98031]: {}
Dec 09 16:05:20 compute-0 systemd[1]: libpod-9ca61f865a8c64a45ae75783f9dc8a07ae421855b4b5be343ae5d8f0ab358e9b.scope: Deactivated successfully.
Dec 09 16:05:20 compute-0 systemd[1]: libpod-9ca61f865a8c64a45ae75783f9dc8a07ae421855b4b5be343ae5d8f0ab358e9b.scope: Consumed 1.338s CPU time.
Dec 09 16:05:20 compute-0 podman[98014]: 2025-12-09 16:05:20.792568163 +0000 UTC m=+0.962627871 container died 9ca61f865a8c64a45ae75783f9dc8a07ae421855b4b5be343ae5d8f0ab358e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_buck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 09 16:05:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a68d3b0946f8cba2e948e0e3ebd47454d5b5f77b8e0528de9577190b32aec480-merged.mount: Deactivated successfully.
Dec 09 16:05:20 compute-0 podman[98014]: 2025-12-09 16:05:20.842076121 +0000 UTC m=+1.012135839 container remove 9ca61f865a8c64a45ae75783f9dc8a07ae421855b4b5be343ae5d8f0ab358e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_buck, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 16:05:20 compute-0 systemd[1]: libpod-conmon-9ca61f865a8c64a45ae75783f9dc8a07ae421855b4b5be343ae5d8f0ab358e9b.scope: Deactivated successfully.
Dec 09 16:05:20 compute-0 sudo[97938]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:05:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:05:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:20 compute-0 sudo[98127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:05:21 compute-0 sudo[98127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:05:21 compute-0 sudo[98127]: pam_unix(sudo:session): session closed for user root
Dec 09 16:05:21 compute-0 ceph-mon[75222]: pgmap v94: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 11 KiB/s wr, 233 op/s
Dec 09 16:05:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:21 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v95: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Dec 09 16:05:23 compute-0 ceph-mon[75222]: pgmap v95: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Dec 09 16:05:23 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v96: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 7.0 KiB/s wr, 155 op/s
Dec 09 16:05:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:05:25 compute-0 ceph-mon[75222]: pgmap v96: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 7.0 KiB/s wr, 155 op/s
Dec 09 16:05:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:05:25
Dec 09 16:05:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:05:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:05:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta']
Dec 09 16:05:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:05:25 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v97: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.3 KiB/s wr, 140 op/s
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.866585488756273e-07 of space, bias 4.0, pg target 0.0008239902586507528 quantized to 16 (current 1)
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:05:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec 09 16:05:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:05:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:05:26 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 16:05:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 09 16:05:27 compute-0 ceph-mon[75222]: pgmap v97: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.3 KiB/s wr, 140 op/s
Dec 09 16:05:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:27 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 09 16:05:27 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 09 16:05:27 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev f1160314-d02b-483e-9bec-b1f9965f3cfa (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 09 16:05:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec 09 16:05:27 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:27 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v99: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.4 KiB/s wr, 141 op/s
Dec 09 16:05:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 16:05:27 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 09 16:05:28 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:28 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 09 16:05:28 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 09 16:05:28 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev 5e3e7953-0080-46aa-a455-a4e40caf48d4 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 09 16:05:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec 09 16:05:28 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:28 compute-0 ceph-mon[75222]: osdmap e47: 3 total, 3 up, 3 in
Dec 09 16:05:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:28 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 48 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=48 pruub=14.252994537s) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active pruub 80.875320435s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:28 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 48 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=48 pruub=14.252994537s) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown pruub 80.875320435s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:05:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 09 16:05:29 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 09 16:05:29 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 09 16:05:29 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev e86f4144-ac21-4bed-9863-b933db1499b9 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1f( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1d( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1e( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1c( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.a( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.9( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.6( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.5( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.4( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.3( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.2( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.8( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.7( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.c( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.d( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.b( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.f( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.e( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.10( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.11( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.12( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1b( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.15( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.14( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.16( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.13( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.17( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.18( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.19( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1a( empty local-lis/les=19/20 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec 09 16:05:29 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1f( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-mon[75222]: pgmap v99: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.4 KiB/s wr, 141 op/s
Dec 09 16:05:29 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:29 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:29 compute-0 ceph-mon[75222]: osdmap e48: 3 total, 3 up, 3 in
Dec 09 16:05:29 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:29 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:29 compute-0 ceph-mon[75222]: osdmap e49: 3 total, 3 up, 3 in
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.a( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1e( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1d( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.6( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.9( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.4( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.5( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.3( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.7( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.0( empty local-lis/les=48/49 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1c( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.8( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.c( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.2( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.d( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.f( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.e( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.10( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.b( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.12( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.11( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1b( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.15( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.14( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.13( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.17( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.16( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.1a( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.18( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 49 pg[2.19( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=19/19 les/c/f=20/20/0 sis=48) [2] r=0 lpr=48 pi=[19,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:29 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v102: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 16:05:29 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 16:05:29 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:29 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 09 16:05:29 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 09 16:05:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 09 16:05:30 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:30 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:30 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 09 16:05:30 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 09 16:05:30 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev 3c8df51f-98ab-4793-bdba-8e74d5bf9cf5 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 09 16:05:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Dec 09 16:05:30 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Dec 09 16:05:30 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 50 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=50 pruub=8.530979156s) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active pruub 85.611038208s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:30 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 50 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=50 pruub=8.530979156s) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown pruub 85.611038208s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:30 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:30 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:30 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:30 compute-0 ceph-mon[75222]: 2.1f scrub starts
Dec 09 16:05:30 compute-0 ceph-mon[75222]: 2.1f scrub ok
Dec 09 16:05:30 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:30 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:30 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:30 compute-0 ceph-mon[75222]: osdmap e50: 3 total, 3 up, 3 in
Dec 09 16:05:30 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 50 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=50 pruub=13.677585602s) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active pruub 87.438301086s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 50 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=50 pruub=13.677585602s) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown pruub 87.438301086s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 09 16:05:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec 09 16:05:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1f( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1e( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1d( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1c( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.8( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.7( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.b( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.6( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1b( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.5( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1a( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.9( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.a( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.19( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.3( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.2( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.4( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.c( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.d( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.e( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.f( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.10( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.11( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.12( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.14( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.13( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.16( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.15( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.17( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.18( empty local-lis/les=23/24 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev aeeddb8e-6e7d-4323-8035-6e30d24c80e0 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec 09 16:05:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec 09 16:05:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1f( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1e( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1d( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1f( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1b( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1a( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.19( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.18( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.7( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.6( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.5( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.3( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.8( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1c( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.a( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.b( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.4( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.2( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.c( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.9( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.d( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.e( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.10( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.11( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.f( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.12( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.13( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.14( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.15( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.16( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.17( empty local-lis/les=21/22 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1c( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1d( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.7( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.8( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.b( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1e( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1b( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1f( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1d( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.6( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1b( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.9( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1a( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.5( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.19( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.a( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.3( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.1( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.2( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.4( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.0( empty local-lis/les=50/51 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.e( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.d( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.c( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.f( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.11( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.12( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.15( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.14( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.13( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.18( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.16( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.10( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 51 pg[4.17( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=23/23 les/c/f=24/24/0 sis=50) [0] r=0 lpr=50 pi=[23,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1e( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.18( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.19( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1a( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.7( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.3( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.6( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.5( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.8( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1c( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.a( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.b( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.2( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.0( empty local-lis/les=50/51 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.d( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.c( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.9( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.4( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.10( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.e( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.f( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.1( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.11( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.12( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.13( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.15( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.16( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.17( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 51 pg[3.14( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=21/21 les/c/f=22/22/0 sis=50) [1] r=0 lpr=50 pi=[21,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:31 compute-0 ceph-mon[75222]: pgmap v102: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:31 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec 09 16:05:31 compute-0 ceph-mon[75222]: osdmap e51: 3 total, 3 up, 3 in
Dec 09 16:05:31 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:31 compute-0 ceph-mgr[75515]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Dec 09 16:05:31 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v105: 104 pgs: 93 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Dec 09 16:05:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Dec 09 16:05:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 16:05:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 09 16:05:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec 09 16:05:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 09 16:05:32 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 09 16:05:32 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev c726f641-f4c9-40e8-9375-2bf4c08475ca (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 09 16:05:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec 09 16:05:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:32 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 52 pg[6.0( v 40'39 (0'0,40'39] local-lis/les=27/28 n=22 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=52 pruub=10.571487427s) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 39'38 mlcod 39'38 active pruub 89.664077759s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:32 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 52 pg[6.0( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=1 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=52 pruub=10.571487427s) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 39'38 mlcod 0'0 unknown pruub 89.664077759s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:32 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Dec 09 16:05:32 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:32 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:32 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec 09 16:05:32 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:32 compute-0 ceph-mon[75222]: osdmap e52: 3 total, 3 up, 3 in
Dec 09 16:05:32 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:32 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 09 16:05:32 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 09 16:05:32 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 09 16:05:32 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 09 16:05:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 09 16:05:33 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 09 16:05:33 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 52 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=52 pruub=15.547734261s) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active pruub 86.934883118s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:33 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev 47354b20-60b4-446f-a486-299148b9b5ec (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 09 16:05:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec 09 16:05:33 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.a( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.5( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=2 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.9( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.4( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=2 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.8( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.7( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.b( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.6( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=2 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.3( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=2 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=52 pruub=15.547734261s) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown pruub 86.934883118s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.2( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=2 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.e( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.f( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.1( v 40'39 (0'0,40'39] local-lis/les=27/28 n=2 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.c( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.d( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=27/28 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.6( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.a( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.5( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.8( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.7( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.10( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.b( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.9( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.1b( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.1c( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.17( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.4( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.0( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 39'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.2( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.6( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.3( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.f( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.1( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.c( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.d( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.8( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 53 pg[6.e( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=27/27 les/c/f=28/28/0 sis=52) [0] r=0 lpr=52 pi=[27,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.a( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.b( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.d( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.e( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 53 pg[5.1f( empty local-lis/les=25/26 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:33 compute-0 ceph-mon[75222]: pgmap v105: 104 pgs: 93 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:33 compute-0 ceph-mon[75222]: 4.1f scrub starts
Dec 09 16:05:33 compute-0 ceph-mon[75222]: 4.1f scrub ok
Dec 09 16:05:33 compute-0 ceph-mon[75222]: 2.a scrub starts
Dec 09 16:05:33 compute-0 ceph-mon[75222]: 2.a scrub ok
Dec 09 16:05:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:33 compute-0 ceph-mon[75222]: osdmap e53: 3 total, 3 up, 3 in
Dec 09 16:05:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:33 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v108: 150 pgs: 108 unknown, 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 16:05:33 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 16:05:33 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:05:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 09 16:05:34 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:34 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:34 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 09 16:05:34 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 09 16:05:34 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev b3be15cb-8e13-4a07-8b5b-ed2070e0cfe8 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 09 16:05:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec 09 16:05:34 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.10( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.1f( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.17( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.a( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.8( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.b( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.0( empty local-lis/les=52/54 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.6( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.1c( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.d( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.1b( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.e( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=25/25 les/c/f=26/26/0 sis=52) [2] r=0 lpr=52 pi=[25,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:34 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec 09 16:05:34 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec 09 16:05:34 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:34 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:34 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:34 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:34 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:34 compute-0 ceph-mon[75222]: osdmap e54: 3 total, 3 up, 3 in
Dec 09 16:05:34 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:34 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 54 pg[7.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=54 pruub=10.441409111s) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active pruub 87.512413025s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:34 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 54 pg[8.0( v 39'6 (0'0,39'6] local-lis/les=38/39 n=6 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=54 pruub=14.171293259s) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 39'5 mlcod 39'5 active pruub 91.242446899s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:34 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 54 pg[7.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=54 pruub=10.441409111s) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown pruub 87.512413025s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:34 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 54 pg[8.0( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=54 pruub=14.171293259s) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 39'5 mlcod 0'0 unknown pruub 91.242446899s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 09 16:05:35 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 09 16:05:35 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 09 16:05:35 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev cda49a30-0c9f-410a-a50e-ede450d597db (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 09 16:05:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec 09 16:05:35 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.13( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1e( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.12( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1d( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.11( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1c( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1f( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.18( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.10( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.17( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.19( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.16( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1a( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.15( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1b( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.14( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.4( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.b( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.5( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.6( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.9( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.a( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.7( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.d( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.2( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.9( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.6( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.b( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.4( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.f( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1( v 39'6 (0'0,39'6] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.f( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.e( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.3( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.a( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.c( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.5( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.8( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.7( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.e( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.c( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.2( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.d( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.3( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.13( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1c( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1d( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.12( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.11( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.10( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1e( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1f( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.17( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.18( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.16( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.19( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1a( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.15( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.14( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1b( empty local-lis/les=29/30 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.13( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1d( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.12( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1c( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.11( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1f( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1e( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.17( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.10( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.18( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.19( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.16( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1a( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.15( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.14( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.4( v 39'6 (0'0,39'6] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.5( v 39'6 (0'0,39'6] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1b( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.9( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.b( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.6( v 39'6 (0'0,39'6] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.a( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.7( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.2( v 39'6 (0'0,39'6] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.6( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.0( empty local-lis/les=54/55 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.9( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.b( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.4( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.0( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 39'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.f( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.1( v 39'6 (0'0,39'6] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.f( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.d( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.e( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.a( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.3( v 39'6 (0'0,39'6] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.c( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.8( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.7( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.5( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.e( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.3( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.2( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.d( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.13( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.c( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1c( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1d( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.12( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1e( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.11( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1f( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.17( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.16( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.18( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.19( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.10( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.15( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1a( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[8.14( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 55 pg[7.1b( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=29/29 les/c/f=30/30/0 sis=54) [1] r=0 lpr=54 pi=[29,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:35 compute-0 ceph-mon[75222]: pgmap v108: 150 pgs: 108 unknown, 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:35 compute-0 ceph-mon[75222]: 3.1f scrub starts
Dec 09 16:05:35 compute-0 ceph-mon[75222]: 3.1f scrub ok
Dec 09 16:05:35 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:35 compute-0 ceph-mon[75222]: osdmap e55: 3 total, 3 up, 3 in
Dec 09 16:05:35 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 09 16:05:35 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v111: 212 pgs: 93 unknown, 119 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 16:05:35 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 16:05:35 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:35 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 09 16:05:35 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 09 16:05:36 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 09 16:05:36 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 09 16:05:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 09 16:05:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 09 16:05:36 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] update: starting ev 2acb3d00-34cc-4f9c-8e34-984fdd3b6cae (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev f1160314-d02b-483e-9bec-b1f9965f3cfa (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event f1160314-d02b-483e-9bec-b1f9965f3cfa (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev 5e3e7953-0080-46aa-a455-a4e40caf48d4 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event 5e3e7953-0080-46aa-a455-a4e40caf48d4 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev e86f4144-ac21-4bed-9863-b933db1499b9 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event e86f4144-ac21-4bed-9863-b933db1499b9 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev 3c8df51f-98ab-4793-bdba-8e74d5bf9cf5 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event 3c8df51f-98ab-4793-bdba-8e74d5bf9cf5 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev aeeddb8e-6e7d-4323-8035-6e30d24c80e0 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event aeeddb8e-6e7d-4323-8035-6e30d24c80e0 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev c726f641-f4c9-40e8-9375-2bf4c08475ca (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event c726f641-f4c9-40e8-9375-2bf4c08475ca (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev 47354b20-60b4-446f-a486-299148b9b5ec (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event 47354b20-60b4-446f-a486-299148b9b5ec (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev b3be15cb-8e13-4a07-8b5b-ed2070e0cfe8 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event b3be15cb-8e13-4a07-8b5b-ed2070e0cfe8 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev cda49a30-0c9f-410a-a50e-ede450d597db (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event cda49a30-0c9f-410a-a50e-ede450d597db (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] complete: finished ev 2acb3d00-34cc-4f9c-8e34-984fdd3b6cae (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event 2acb3d00-34cc-4f9c-8e34-984fdd3b6cae (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec 09 16:05:36 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 56 pg[10.0( v 46'18 (0'0,46'18] local-lis/les=42/43 n=9 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=56 pruub=8.339349747s) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 46'17 mlcod 46'17 active pruub 82.744361877s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:36 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 56 pg[10.0( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=56 pruub=8.339349747s) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 46'17 mlcod 0'0 unknown pruub 82.744361877s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:36 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:36 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:36 compute-0 ceph-mon[75222]: 2.1d scrub starts
Dec 09 16:05:36 compute-0 ceph-mon[75222]: 2.1d scrub ok
Dec 09 16:05:36 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 09 16:05:36 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:36 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:36 compute-0 ceph-mon[75222]: osdmap e56: 3 total, 3 up, 3 in
Dec 09 16:05:36 compute-0 ceph-mgr[75515]: [progress INFO root] Writing back 15 completed events
Dec 09 16:05:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 16:05:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:36 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 56 pg[9.0( v 46'483 (0'0,46'483] local-lis/les=40/41 n=210 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=56 pruub=13.888373375s) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 46'482 mlcod 46'482 active pruub 93.261390686s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:36 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 56 pg[9.0( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=56 pruub=13.888373375s) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 46'482 mlcod 0'0 unknown pruub 93.261390686s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1eea80 space 0x56402a7ba840 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1e3400 space 0x564029a65a40 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2c9080 space 0x564029a97d40 0x0~98 clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1e3600 space 0x564029971140 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a248b00 space 0x564029a66b40 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1f7500 space 0x56402996ab40 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2b5d00 space 0x5640294bfd40 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1e2680 space 0x56402a363d40 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1e2b00 space 0x56402a351140 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1e3700 space 0x56402a362e40 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2c9900 space 0x5640294eda40 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1eef00 space 0x5640294bf440 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2b9f80 space 0x564029a66240 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1e3f00 space 0x564029abab40 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2b8f00 space 0x56402a1ad440 0x0~98 clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a243680 space 0x564029a96540 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a24c600 space 0x56402a11cb40 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a248700 space 0x5640299e2540 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a243d80 space 0x5640294ed440 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1f7780 space 0x56402a350840 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2c9600 space 0x564029a9a840 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a243480 space 0x564029a96e40 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2acc80 space 0x5640294beb40 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1e3580 space 0x5640294ec240 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2c9d00 space 0x5640298cda40 0x0~98 clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2c9e80 space 0x564029444240 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1a5880 space 0x564028cf4e40 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a256600 space 0x564029a9ba40 0x0~98 clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1a4d80 space 0x56402a6a3740 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a248180 space 0x5640299e4e40 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1a4f80 space 0x564029a23740 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a243100 space 0x5640294ecb40 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a249500 space 0x564029970240 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2b9080 space 0x56402a34ab40 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1e2700 space 0x564029445440 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1a4e00 space 0x5640299cb440 0x0~98 clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2ac280 space 0x564029aba240 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a256500 space 0x5640299ca540 0x0~98 clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1eed80 space 0x564029970b40 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2c8280 space 0x56402a621740 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1e3b00 space 0x564029abba40 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a243980 space 0x56402a363740 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a24c280 space 0x564029975d40 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2ac880 space 0x5640294be240 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1a5000 space 0x564029aa9740 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2b8b00 space 0x56402a11da40 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a249d80 space 0x56402a76d140 0x0~98 clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a1f7300 space 0x56402a6d9140 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a256780 space 0x56402a7ba240 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2c8c80 space 0x564029a9b140 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a243280 space 0x564029974240 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a248200 space 0x564029a92540 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a154d00 space 0x564029a67740 0x0~9a clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a264380 space 0x56402a32b440 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a242d00 space 0x564029a93740 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2c8180 space 0x564029444b40 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2b8d80 space 0x56402a37a240 0x0~98 clean)
Dec 09 16:05:36 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56402a1b06c0) split_cache   moving buffer(0x56402a2da000 space 0x564029a92e40 0x0~6e clean)
Dec 09 16:05:36 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 09 16:05:36 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 09 16:05:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 09 16:05:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 09 16:05:37 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 09 16:05:37 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.11( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.10( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.12( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.15( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.14( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.17( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.16( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.11( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1f( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.10( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.12( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.13( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.d( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.c( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.f( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.9( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.b( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1e( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.2( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1b( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1c( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1a( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.a( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.3( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.8( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.e( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.6( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.18( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.7( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.4( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.5( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1a( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1b( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.19( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.18( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1e( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.19( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1f( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1d( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1c( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1d( v 46'483 lc 0'0 (0'0,46'483] local-lis/les=40/41 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.7( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.6( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.5( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.4( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.3( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.f( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.9( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.a( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.b( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.c( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.d( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.8( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1( v 46'18 (0'0,46'18] local-lis/les=42/43 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.2( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.13( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.14( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.16( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.14( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.17( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.11( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.10( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.12( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.d( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.13( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.c( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.9( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.f( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.b( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.2( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.0( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 46'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.15( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.17( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.12( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.e( v 46'18 lc 0'0 (0'0,46'18] local-lis/les=42/43 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.11( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1e( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1f( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1b( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.10( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1a( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1c( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.18( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1d( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.6( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.19( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.4( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.5( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.7( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.f( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.3( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.9( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.a( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.0( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 46'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.c( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.b( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.d( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.1( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.8( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.2( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.13( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.16( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.15( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.17( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.14( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 57 pg[10.e( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=42/42 les/c/f=43/43/0 sis=56) [2] r=0 lpr=56 pi=[42,56)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.e( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.3( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.a( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.6( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.5( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1a( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1b( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.18( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.4( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.7( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1e( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.8( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.19( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1d( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 57 pg[9.1c( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=40/40 les/c/f=41/41/0 sis=56) [1] r=0 lpr=56 pi=[40,56)/1 crt=46'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:37 compute-0 ceph-mon[75222]: pgmap v111: 212 pgs: 93 unknown, 119 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:37 compute-0 ceph-mon[75222]: 3.1d scrub starts
Dec 09 16:05:37 compute-0 ceph-mon[75222]: 3.1d scrub ok
Dec 09 16:05:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:37 compute-0 ceph-mon[75222]: 2.1e scrub starts
Dec 09 16:05:37 compute-0 ceph-mon[75222]: 2.1e scrub ok
Dec 09 16:05:37 compute-0 ceph-mon[75222]: osdmap e57: 3 total, 3 up, 3 in
Dec 09 16:05:37 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 09 16:05:37 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 09 16:05:37 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v114: 274 pgs: 155 unknown, 119 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 16:05:37 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:38 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 09 16:05:38 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 09 16:05:38 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 09 16:05:38 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 09 16:05:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 09 16:05:38 compute-0 ceph-mon[75222]: 4.1d scrub starts
Dec 09 16:05:38 compute-0 ceph-mon[75222]: 4.1d scrub ok
Dec 09 16:05:38 compute-0 ceph-mon[75222]: 2.6 scrub starts
Dec 09 16:05:38 compute-0 ceph-mon[75222]: 2.6 scrub ok
Dec 09 16:05:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 09 16:05:38 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 09 16:05:38 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 09 16:05:38 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 58 pg[11.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=15.998214722s) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active pruub 97.305633545s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:38 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 58 pg[11.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=15.998214722s) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown pruub 97.305633545s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:38 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 09 16:05:38 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 09 16:05:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:05:39 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 09 16:05:39 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 09 16:05:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 09 16:05:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 09 16:05:39 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.16( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.15( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.13( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.11( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.10( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.d( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.b( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.9( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.2( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.c( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.a( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.6( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.18( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1f( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.16( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.15( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.13( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:39 compute-0 ceph-mon[75222]: pgmap v114: 274 pgs: 155 unknown, 119 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:39 compute-0 ceph-mon[75222]: 3.1b scrub starts
Dec 09 16:05:39 compute-0 ceph-mon[75222]: 3.1b scrub ok
Dec 09 16:05:39 compute-0 ceph-mon[75222]: 4.1c scrub starts
Dec 09 16:05:39 compute-0 ceph-mon[75222]: 4.1c scrub ok
Dec 09 16:05:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 16:05:39 compute-0 ceph-mon[75222]: osdmap e58: 3 total, 3 up, 3 in
Dec 09 16:05:39 compute-0 ceph-mon[75222]: 2.9 scrub starts
Dec 09 16:05:39 compute-0 ceph-mon[75222]: 2.9 scrub ok
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.11( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.d( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=58/59 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.9( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.b( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.10( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.2( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.c( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.a( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.6( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.18( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1f( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:39 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v117: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:40 compute-0 ceph-mon[75222]: 4.8 scrub starts
Dec 09 16:05:40 compute-0 ceph-mon[75222]: 4.8 scrub ok
Dec 09 16:05:40 compute-0 ceph-mon[75222]: osdmap e59: 3 total, 3 up, 3 in
Dec 09 16:05:41 compute-0 ceph-mon[75222]: pgmap v117: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:41 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 09 16:05:41 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 09 16:05:41 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:41 compute-0 sshd-session[98153]: Invalid user admin from 146.190.31.45 port 57436
Dec 09 16:05:42 compute-0 sshd-session[98153]: Connection closed by invalid user admin 146.190.31.45 port 57436 [preauth]
Dec 09 16:05:42 compute-0 ceph-mon[75222]: 2.5 scrub starts
Dec 09 16:05:42 compute-0 ceph-mon[75222]: 2.5 scrub ok
Dec 09 16:05:43 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 09 16:05:43 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 09 16:05:43 compute-0 ceph-mon[75222]: pgmap v118: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:43 compute-0 ceph-mon[75222]: 3.18 scrub starts
Dec 09 16:05:43 compute-0 ceph-mon[75222]: 3.18 scrub ok
Dec 09 16:05:43 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v119: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:44 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 09 16:05:44 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 09 16:05:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:05:44 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 09 16:05:44 compute-0 ceph-mon[75222]: 3.19 scrub starts
Dec 09 16:05:44 compute-0 ceph-mon[75222]: 3.19 scrub ok
Dec 09 16:05:44 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 09 16:05:44 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 09 16:05:44 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 09 16:05:45 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 09 16:05:45 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 09 16:05:45 compute-0 ceph-mon[75222]: pgmap v119: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:45 compute-0 ceph-mon[75222]: 4.7 scrub starts
Dec 09 16:05:45 compute-0 ceph-mon[75222]: 4.7 scrub ok
Dec 09 16:05:45 compute-0 ceph-mon[75222]: 2.3 scrub starts
Dec 09 16:05:45 compute-0 ceph-mon[75222]: 2.3 scrub ok
Dec 09 16:05:45 compute-0 ceph-mon[75222]: 3.1a scrub starts
Dec 09 16:05:45 compute-0 ceph-mon[75222]: 3.1a scrub ok
Dec 09 16:05:45 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 09 16:05:45 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 09 16:05:45 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v120: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 16:05:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 16:05:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 16:05:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 09 16:05:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 09 16:05:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 16:05:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 09 16:05:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 09 16:05:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 16:05:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 16:05:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 16:05:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 16:05:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:46 compute-0 ceph-mgr[75515]: [progress INFO root] Completed event 0872fc74-acb3-43e6-b937-75f038878cc4 (Global Recovery Event) in 15 seconds
Dec 09 16:05:46 compute-0 systemd[76651]: Starting Mark boot as successful...
Dec 09 16:05:46 compute-0 systemd[76651]: Finished Mark boot as successful.
Dec 09 16:05:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 09 16:05:46 compute-0 ceph-mon[75222]: 2.7 scrub starts
Dec 09 16:05:46 compute-0 ceph-mon[75222]: 2.7 scrub ok
Dec 09 16:05:46 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:46 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:46 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:46 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 09 16:05:46 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:46 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 09 16:05:46 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:46 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:46 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:46 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:05:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 09 16:05:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 09 16:05:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 09 16:05:46 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.12( v 58'19 (0'0,58'19] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.836873055s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 46'18 active pruub 99.413658142s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.1d( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.821227074s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398040771s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.12( v 58'19 (0'0,58'19] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.836809158s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 46'18 unknown NOTIFY pruub 99.413658142s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.1d( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.821170807s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398040771s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.19( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.800035477s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.376937866s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.19( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.799987793s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.376937866s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.1e( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.820962906s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398033142s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.18( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.798814774s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.376022339s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.11( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.836464882s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.413673401s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.17( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.798867226s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.376113892s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.10( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.838508606s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.415763855s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.18( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.798781395s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.376022339s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.11( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.836410522s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.413673401s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.17( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.798832893s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.376113892s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.1e( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.820643425s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398033142s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.10( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.838483810s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.415763855s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.1e( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.838112831s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.415672302s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.12( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.820355415s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398139954s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.12( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.820337296s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398139954s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.11( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.820209503s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398117065s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[2.19( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.11( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.820119858s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398117065s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.1e( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.838084221s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.415672302s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.15( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.797780991s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.375907898s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.15( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.797760963s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.375907898s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.13( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.820184708s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398391724s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.14( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.820388794s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398773193s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.13( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.820164680s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398391724s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.14( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.820372581s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398773193s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.16( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.798348427s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.375953674s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.15( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.819672585s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398269653s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.15( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.819655418s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398269653s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.13( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.797269821s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.375938416s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.16( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.797341347s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.375953674s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.1a( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837014198s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.415771484s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.19( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837409019s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.416183472s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.11( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.797062874s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.375846863s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.13( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.796998024s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.375938416s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.16( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.819314957s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398307800s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.16( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.819274902s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398307800s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.7( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837016106s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.416381836s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.7( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.836828232s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.416381836s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.19( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.836460114s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.416183472s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.6( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.836031914s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.416000366s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.6( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.835988045s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.416000366s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.11( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.795747757s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.375846863s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.9( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.818198204s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398376465s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.9( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.818167686s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398376465s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.d( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.794824600s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.375114441s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.d( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.794800758s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.375114441s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.f( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.795457840s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.375404358s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.f( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.794875145s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.375404358s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.4( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.835509300s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.416206360s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.b( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.794929504s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.375656128s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.b( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.794892311s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.375656128s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.4( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.835457802s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.416206360s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.1a( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.834886551s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.415771484s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.8( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.840164185s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.421394348s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.c( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.817646980s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398910522s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.1c( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803501129s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.092903137s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.8( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.840127945s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.421394348s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.c( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.817625046s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398910522s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.1c( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803469658s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.092903137s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.7( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.817033768s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398506165s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.5( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.816205978s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 active pruub 104.105796814s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.7( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.816994667s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398506165s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.7( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803263664s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.092857361s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.f( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.834941864s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.416511536s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.8( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803346634s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.092956543s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.5( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.816181183s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 104.105796814s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.f( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.834913254s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.416511536s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.7( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803230286s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.092857361s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.7( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.792988777s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.374679565s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.8( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803295135s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.092956543s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.7( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.792967796s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.374679565s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.9( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.815980911s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 active pruub 104.105781555s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.f( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.816672325s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398544312s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.9( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.815960884s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 104.105781555s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.8( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.793141365s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.375007629s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.1b( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803323746s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.093162537s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.f( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.816654205s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398544312s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.1b( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803306580s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.093162537s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.8( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.793114662s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.375007629s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.7( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.815879822s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 active pruub 104.105857849s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.9( v 58'19 (0'0,58'19] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.834584236s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 46'18 active pruub 99.416618347s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.7( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.815859795s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 104.105857849s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.1a( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803277969s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.093376160s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.1a( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803200722s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.093376160s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.9( v 58'19 (0'0,58'19] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.834551811s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 46'18 unknown NOTIFY pruub 99.416618347s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.a( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803291321s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.093521118s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.5( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803250313s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.093498230s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.5( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803205490s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.093498230s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.2( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.792857170s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.375122070s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.5( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.816328049s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398612976s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.a( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.803247452s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.093521118s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.2( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.792827606s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.375122070s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.b( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.816222191s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 active pruub 104.106651306s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.5( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.816308975s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398612976s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.b( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.816202164s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 104.106651306s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.3( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.792141914s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.374595642s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.3( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.818210602s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 active pruub 104.108856201s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.3( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.818192482s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 104.108856201s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.3( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.792087555s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.374595642s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.4( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.815989494s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398551941s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.1( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802611351s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.093551636s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.4( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802657127s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.093635559s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.2( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802576065s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.093566895s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.1( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802559853s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.093551636s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.4( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802635193s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.093635559s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.2( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802557945s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.093566895s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.f( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.817642212s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 active pruub 104.108886719s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.d( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802588463s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.093841553s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.f( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.817621231s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 104.108886719s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.d( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802568436s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.093841553s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.e( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802278519s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.093711853s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.d( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.817510605s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 active pruub 104.108963013s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.e( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802261353s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.093711853s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.d( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.817490578s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 104.108963013s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.f( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802292824s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.093864441s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.f( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802280426s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.093864441s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.10( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802639008s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.094276428s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.10( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802621841s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.094276428s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.11( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802233696s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.093971252s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.11( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802216530s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.093971252s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.12( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802227020s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.094062805s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.13( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802268982s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.094123840s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.12( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802208900s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.094062805s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.1( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.817039490s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 active pruub 104.108901978s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.13( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802253723s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.094123840s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[6.1( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=10.816984177s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 104.108901978s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.14( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802153587s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.094146729s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.14( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802136421s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.094146729s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.18( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802043915s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.094139099s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.18( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.802031517s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.094139099s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.9( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.801071167s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 102.093261719s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[4.9( empty local-lis/les=50/51 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.801055908s) [1] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 102.093261719s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[2.18( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[5.1e( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[5.14( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.4( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.815892220s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398551941s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.4( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.791777611s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.374549866s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.b( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837651253s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.420440674s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.4( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.791760445s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.374549866s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.b( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837624550s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.420440674s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.d( v 58'19 (0'0,58'19] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.838457108s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 46'18 active pruub 99.421363831s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[5.15( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.3( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.815632820s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398559570s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.d( v 58'19 (0'0,58'19] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.838427544s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 46'18 unknown NOTIFY pruub 99.421363831s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.3( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.815594673s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398559570s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.5( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.791557312s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.374549866s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.5( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.791533470s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.374549866s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[2.16( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.2( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.815518379s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398590088s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.2( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.815503120s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398590088s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.6( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.791400909s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.374572754s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.6( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.791379929s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.374572754s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.1( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.815446854s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398658752s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.e( v 58'19 (0'0,58'19] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.838340759s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 46'18 active pruub 99.421546936s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.1( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.815431595s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398658752s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.e( v 58'19 (0'0,58'19] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.838281631s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 46'18 unknown NOTIFY pruub 99.421546936s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.1( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.838018417s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.421379089s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.9( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.791140556s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.374519348s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.1( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837995529s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.421379089s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.9( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.791119576s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.374519348s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.a( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.790833473s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.374374390s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.a( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.790815353s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.374374390s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.2( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837818146s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.421401978s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.2( v 46'18 (0'0,46'18] local-lis/les=56/57 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837789536s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.421401978s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[2.13( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[10.7( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.13( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837749481s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.421463013s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.13( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837736130s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.421463013s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.14( v 58'19 (0'0,58'19] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837625504s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 46'18 active pruub 99.421470642s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.14( v 58'19 (0'0,58'19] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837598801s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 46'18 unknown NOTIFY pruub 99.421470642s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.1c( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.790850639s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.374877930s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.1c( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.790836334s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.374877930s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.1a( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.814609528s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398742676s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.1a( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.814599037s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398742676s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.15( v 58'19 (0'0,58'19] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837300301s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 46'18 active pruub 99.421470642s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.15( v 58'19 (0'0,58'19] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.837251663s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 46'18 unknown NOTIFY pruub 99.421470642s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.1d( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.790152550s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.374412537s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.1d( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.790135384s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.374412537s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.19( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.814275742s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398750305s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.19( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.814247131s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398750305s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.17( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.836894035s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.421493530s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.17( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.836875916s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.421493530s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.16( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.836831093s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 active pruub 99.421463013s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[2.11( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.18( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.814059258s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 active pruub 96.398788452s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.1f( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.784831047s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.369567871s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[10.16( v 46'18 (0'0,46'18] local-lis/les=56/57 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.836771965s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 unknown NOTIFY pruub 99.421463013s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[5.18( empty local-lis/les=52/54 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60 pruub=11.814044952s) [1] r=-1 lpr=60 pi=[52,60)/1 crt=0'0 unknown NOTIFY pruub 96.398788452s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.1f( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.784779549s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.369567871s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.1b( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.790877342s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 99.375839233s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[2.1b( empty local-lis/les=48/49 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=14.790848732s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 99.375839233s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[4.1c( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[4.1b( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[2.f( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[5.11( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[4.10( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[4.a( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[4.1a( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[5.7( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[2.17( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[2.8( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[4.1( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[10.9( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[2.2( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[5.13( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[4.12( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[5.5( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[4.e( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[4.11( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[5.12( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[2.15( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[4.14( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[5.16( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[4.13( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[4.18( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[5.4( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[5.9( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[4.8( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[6.b( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[4.9( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[5.3( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[6.9( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[2.d( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[5.f( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[5.2( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[6.7( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[4.5( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[2.3( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[10.b( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[4.7( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[6.1( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[2.5( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[2.a( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[2.b( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[6.f( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[4.d( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[2.1c( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[5.c( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[4.f( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[2.9( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[4.4( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[2.1d( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[10.4( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[10.17( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[4.2( empty local-lis/les=0/0 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[2.4( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[6.3( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[10.f( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[2.7( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[2.6( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[5.1( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[10.11( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[10.10( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[2.1b( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[10.1( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[10.12( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[2.1f( empty local-lis/les=0/0 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[5.1d( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[5.1a( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[5.18( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[5.19( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.886249542s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040168762s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.1b( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.809034348s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.963020325s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.1f( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.774824142s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.928833008s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.886178970s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040168762s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.1f( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.774784088s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.928833008s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.1e( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.779326439s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.933601379s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.1a( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.808467865s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.962951660s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.1e( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.779121399s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.933601379s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.1a( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.808446884s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.962951660s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.15( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.808293343s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.962921143s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.15( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.808277130s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.962921143s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[11.17( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[3.1f( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[8.14( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.14( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.808204651s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.962959290s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.882685661s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.037483215s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.1d( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.774059296s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.928863525s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.14( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.808153152s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.962959290s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.1d( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.774037361s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.928863525s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.882663727s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.037483215s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.805681229s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.960525513s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.805621147s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.960525513s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.1b( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.808137894s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.963020325s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.17( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.808247566s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.963409424s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.882354736s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.037521362s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.18( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.807692528s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.962875366s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.17( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.808222771s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.963409424s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.882330894s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.037521362s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.18( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.807667732s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.962875366s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.1f( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.803070068s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.958412170s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.1b( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.773488998s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.928871155s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.1f( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.803041458s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.958412170s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.11( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.808012962s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.963424683s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.10( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.807439804s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.962852478s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.11( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.807990074s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.963424683s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.10( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.807416916s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.962852478s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.1b( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.773425102s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.928871155s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[7.1b( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.881970406s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.037544250s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.881953239s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.037544250s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.11( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.802749634s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.958435059s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.884202957s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.039978027s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.11( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.802683830s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.958435059s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.884184837s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.039978027s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.884295464s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040283203s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.884273529s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040283203s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.13( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.807446480s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.963516235s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.13( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.807377815s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.963516235s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.18( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.777548790s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.933776855s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.12( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.802149773s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.958389282s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.18( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.777524948s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.933776855s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.12( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.802123070s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.958389282s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.1c( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.801299095s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.957695007s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.1c( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.801282883s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.957695007s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.3( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.800899506s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.957656860s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.3( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.800837517s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.957656860s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.7( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.776766777s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.933677673s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.7( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.776741028s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.933677673s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.d( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.806329727s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.963508606s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.c( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.800516129s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.957702637s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.882788658s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.039993286s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.c( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.800494194s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.957702637s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.882767677s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.039993286s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.2( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.800210953s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.957611084s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.2( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.800193787s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.957611084s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.d( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.806086540s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.963508606s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.6( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.776370049s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.933906555s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.6( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.776302338s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.933906555s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.d( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.799962044s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.957672119s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.d( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.799937248s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.957672119s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.882157326s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040000916s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.882134438s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040000916s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.1( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.799700737s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.957588196s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.1( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.799678802s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.957588196s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.5( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.776017189s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.933944702s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.e( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.799528122s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.957580566s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.e( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.799504280s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.957580566s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.f( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.805487633s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.963737488s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.5( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.775716782s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.933944702s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.f( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.805464745s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.963737488s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.881930351s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040260315s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.881851196s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040260315s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.3( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.775381088s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.933868408s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.3( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.775359154s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.933868408s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.9( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.805081367s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.963638306s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.881637573s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040260315s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.9( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.805047035s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.963638306s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.5( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.796530724s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.955184937s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.881613731s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040260315s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.5( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.796507835s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.955184937s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.1( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.775778770s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.934600830s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.b( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.804930687s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.963775635s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.1( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.775753975s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.934600830s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.b( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.804906845s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.963775635s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.c( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.796234131s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.955146790s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.8( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.775012016s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.933975220s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.8( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.774989128s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.933975220s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.c( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.796210289s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.955146790s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.881173134s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040298462s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.881153107s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040298462s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.e( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.795877457s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.955093384s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.e( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.795856476s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.955093384s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.a( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.774827003s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.934181213s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.a( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.774806976s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.934181213s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.880907059s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040321350s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.f( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.795547485s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.955017090s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.f( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.795526505s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.955017090s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.1( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.807726860s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.967300415s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.1( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.807708740s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.967300415s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.880735397s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040321350s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.880575180s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040351868s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.f( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.795179367s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.954963684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.f( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.795155525s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.954963684s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.880554199s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040351868s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.4( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.795045853s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.954910278s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.b( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.794862747s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.954864502s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.4( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.794935226s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.954910278s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[7.18( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.b( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.794840813s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.954864502s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[11.14( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.6( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.794611931s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.954841614s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.6( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.794589996s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.954841614s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.9( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.794493675s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.954772949s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.9( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.794459343s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.954772949s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.879997253s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040412903s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.9( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.774036407s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.934463501s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.9( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.774010658s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.934463501s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.2( v 39'6 (0'0,39'6] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.794275284s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.954757690s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.879968643s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040412903s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.2( v 39'6 (0'0,39'6] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.794253349s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.954757690s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.3( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.806546211s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.967163086s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.879755020s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040390015s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.879734993s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040390015s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.8( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.793970108s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.954681396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.8( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.793950081s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.954681396s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.3( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.806523323s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.967163086s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.c( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.773523331s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.934333801s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.c( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.773493767s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.934333801s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.9( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.793641090s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.954658508s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[7.1f( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[8.10( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[3.1b( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.6( v 39'6 (0'0,39'6] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.793650627s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.954696655s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.9( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.793619156s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.954658508s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.7( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.807090759s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.968170166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[11.10( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.7( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.807055473s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.968170166s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.6( v 39'6 (0'0,39'6] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.793583870s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.954696655s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.a( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.793466568s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.954727173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.a( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.793446541s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.954727173s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.879198074s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040504456s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.e( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.773225784s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.934547424s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.e( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.773203850s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.934547424s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.879172325s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040504456s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.f( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.773004532s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.934562683s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.4( v 39'6 (0'0,39'6] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.793078423s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.954650879s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.f( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.772982597s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.934562683s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.4( v 39'6 (0'0,39'6] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.793056488s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.954650879s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.878727913s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040557861s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.878707886s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040557861s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.5( v 58'484 (0'0,58'484] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.805529594s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 46'483 active pruub 103.967521667s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.1b( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.792607307s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.954620361s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.1b( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.792583466s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.954620361s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.5( v 58'484 (0'0,58'484] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.805472374s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 46'483 unknown NOTIFY pruub 103.967521667s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.878274918s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040512085s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.15( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.792324066s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.954582214s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.878240585s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040512085s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.11( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.772173882s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.934608459s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.1a( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.792106628s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.954574585s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.15( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.792123795s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.954582214s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.11( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.772137642s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.934608459s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.877690315s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040199280s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.1a( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.791975021s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.954574585s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.877620697s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040199280s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.12( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.771844864s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.934616089s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.12( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.771821976s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.934616089s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.1b( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.805279732s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.968139648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.877700806s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040588379s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.877581596s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.040534973s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.1b( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.805191994s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.968139648s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.877558708s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040534973s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.18( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.791453362s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.954521179s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.18( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.791382790s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.954521179s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.19( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.805019379s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.968177795s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.19( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.804998398s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.968177795s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.879046440s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.042282104s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.879013062s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.042282104s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.1f( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.791119576s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.954483032s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.1f( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.791101456s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.954483032s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.11( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.790625572s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.954452515s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.11( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.790605545s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.954452515s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.804474831s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.968185425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.15( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.770880699s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.934852600s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.804214478s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.968185425s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.15( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.770861626s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.934852600s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.877008438s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.040588379s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.878070831s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.042274475s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.16( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.770638466s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.934867859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.878046989s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.042274475s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.16( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.770619392s) [2] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.934867859s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.1d( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.789855003s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.954246521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.877861023s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 active pruub 98.042282104s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=8.877840996s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 unknown NOTIFY pruub 98.042282104s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.1d( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.803751945s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 active pruub 103.968223572s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[9.1d( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=14.803731918s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 103.968223572s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.13( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.789631844s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 active pruub 101.954223633s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[7.3( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[3.1e( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[8.c( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[7.1a( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[8.15( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[3.1d( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.15( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.12( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[8.11( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.11( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[3.18( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.17( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.770256996s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 active pruub 97.934883118s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.1c( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.789795876s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 active pruub 101.954475403s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[7.13( empty local-lis/les=54/55 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.789549828s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=0'0 unknown NOTIFY pruub 101.954223633s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[3.17( empty local-lis/les=50/51 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=8.770178795s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=0'0 unknown NOTIFY pruub 97.934883118s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.1c( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.789772987s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.954475403s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 60 pg[8.1d( v 39'6 (0'0,39'6] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=12.789558411s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 unknown NOTIFY pruub 101.954246521s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[8.12( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[3.6( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[3.7( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[11.e( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[7.2( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[8.e( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[7.1c( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[8.d( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[3.3( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.d( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.9( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[7.1( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.b( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[3.5( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.b( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[3.a( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[7.5( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[7.f( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.1( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[8.f( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[8.b( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[3.1( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[7.6( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.9( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[7.4( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[3.8( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[7.c( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[8.9( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.2( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[3.9( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[7.e( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.3( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.8( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[11.1( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[8.2( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[7.8( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[11.4( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.3( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[7.a( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[3.c( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[7.9( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[3.e( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[8.4( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.18( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[8.6( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[8.1b( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[11.6( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[7.15( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[3.11( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[3.f( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.1a( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[11.19( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.1c( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[7.11( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.5( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.1b( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[8.1a( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.1e( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[3.16( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[11.1f( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 60 pg[8.1c( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[11.f( empty local-lis/les=0/0 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[3.12( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.1b( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[8.18( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[8.1f( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[3.15( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[3.17( empty local-lis/les=0/0 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[7.13( empty local-lis/les=0/0 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 60 pg[8.1d( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-mon[75222]: pgmap v120: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 09 16:05:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 09 16:05:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:05:47 compute-0 ceph-mon[75222]: osdmap e60: 3 total, 3 up, 3 in
Dec 09 16:05:47 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 09 16:05:47 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 09 16:05:47 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.17( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.17( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.11( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.11( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.13( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.13( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.d( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.d( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.f( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.f( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.b( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.b( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.9( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.9( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.1( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.1( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.3( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.3( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.5( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.5( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.b( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.b( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.7( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.7( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.5( v 58'484 (0'0,58'484] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 46'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.5( v 58'484 (0'0,58'484] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 46'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.1b( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.1b( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.9( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.9( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.1d( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.1( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.1( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.1d( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[5.19( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[5.18( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.19( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[9.19( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[3.18( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.3( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.3( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[7.1b( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[3.1f( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[11.17( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[8.14( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[7.18( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[7.1f( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[3.1b( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[11.10( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[8.10( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[5.1e( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[2.19( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[2.18( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[3.f( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[11.14( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[10.9( v 58'19 lc 43'8 (0'0,58'19] local-lis/les=60/61 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=58'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[7.4( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[3.c( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[11.4( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[5.7( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[3.1( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[2.1d( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[8.b( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[10.4( v 46'18 (0'0,46'18] local-lis/les=60/61 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[10.15( v 58'19 lc 43'3 (0'0,58'19] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=58'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[7.1c( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[10.14( v 58'19 lc 43'7 (0'0,58'19] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=58'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[5.1d( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[10.10( v 46'18 (0'0,46'18] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[10.12( v 58'19 lc 46'17 (0'0,58'19] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=58'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[2.1b( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[10.13( v 46'18 (0'0,46'18] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[5.1a( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[10.11( v 46'18 (0'0,46'18] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[2.6( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[5.1( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[6.3( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=60/61 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=40'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[2.7( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[10.f( v 46'18 (0'0,46'18] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[2.4( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[4.2( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[4.4( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[2.9( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[4.f( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[5.c( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[7.9( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[5.4( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[2.1c( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[10.8( v 46'18 (0'0,46'18] local-lis/les=60/61 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[2.f( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[10.7( v 46'18 (0'0,46'18] local-lis/les=60/61 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[7.6( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[8.9( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[2.2( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[5.5( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[3.3( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[11.6( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[8.6( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=60/61 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=39'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[2.1f( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[10.17( v 46'18 (0'0,46'18] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[10.d( v 58'19 lc 43'5 (0'0,58'19] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=58'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[8.f( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=39'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[6.d( v 40'39 lc 39'13 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[4.d( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[2.a( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[6.f( v 40'39 lc 39'1 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[6.1( v 40'39 (0'0,40'39] local-lis/les=60/61 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[10.2( v 46'18 (0'0,46'18] local-lis/les=60/61 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[2.5( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[7.3( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[8.c( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[3.6( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[11.f( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[11.e( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[5.3( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[8.e( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[10.e( v 58'19 lc 43'4 (0'0,58'19] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=58'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[6.5( v 40'39 lc 39'11 (0'0,40'39] local-lis/les=60/61 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.11( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.2( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[7.e( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.9( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[7.5( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[8.12( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=39'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[8.d( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.b( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[3.5( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[7.1( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.d( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[7.c( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[3.8( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.3( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[3.7( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[8.11( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.12( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[3.1d( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.15( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[8.15( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[3.1e( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[7.1a( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.8( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[4.18( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[4.1b( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[8.2( v 39'6 (0'0,39'6] local-lis/les=60/61 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[7.8( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[4.1a( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[3.e( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[7.a( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[7.2( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.18( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[8.1b( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[8.4( v 39'6 (0'0,39'6] local-lis/les=60/61 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[4.e( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[4.1( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.1a( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[7.15( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[3.11( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.1b( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[7.11( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.1c( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[4.a( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.1e( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[3.16( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[11.1f( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[8.1c( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [2] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[4.13( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[4.11( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 61 pg[4.1c( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [2] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[4.7( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[10.b( v 46'18 (0'0,46'18] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[2.3( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[4.5( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[6.7( v 40'39 lc 39'21 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[5.f( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[2.d( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[4.9( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[6.b( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=40'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[6.9( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[5.9( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[10.6( v 46'18 (0'0,46'18] local-lis/les=60/61 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[5.16( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[4.14( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[10.19( v 46'18 (0'0,46'18] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[2.15( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[5.12( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[4.8( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[2.17( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[5.13( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[10.1a( v 46'18 (0'0,46'18] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[4.10( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[5.11( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 61 pg[4.12( empty local-lis/les=60/61 n=0 ec=50/23 lis/c=50/50 les/c/f=51/51/0 sis=60) [1] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[5.2( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[7.f( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[2.b( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[3.a( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[2.8( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[3.9( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[11.1( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[7.13( empty local-lis/les=60/61 n=0 ec=54/29 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[3.17( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[2.16( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[10.1e( v 46'18 (0'0,46'18] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[8.1d( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[3.15( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[10.1( v 46'18 (0'0,46'18] local-lis/les=60/61 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[8.1f( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[8.18( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[5.15( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[2.13( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[3.12( empty local-lis/les=60/61 n=0 ec=50/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[11.19( empty local-lis/les=60/61 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[8.1a( v 39'6 (0'0,39'6] local-lis/les=60/61 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[2.11( empty local-lis/les=60/61 n=0 ec=48/19 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[10.16( v 46'18 (0'0,46'18] local-lis/les=60/61 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 61 pg[5.14( empty local-lis/les=60/61 n=0 ec=52/25 lis/c=52/52 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:47 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:47 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 09 16:05:47 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 09 16:05:47 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 09 16:05:47 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 09 16:05:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 09 16:05:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 09 16:05:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 09 16:05:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 09 16:05:48 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 09 16:05:48 compute-0 ceph-mon[75222]: osdmap e61: 3 total, 3 up, 3 in
Dec 09 16:05:48 compute-0 ceph-mon[75222]: pgmap v123: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:48 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 09 16:05:48 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 09 16:05:48 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 62 pg[6.a( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=8.620635986s) [1] r=-1 lpr=62 pi=[52,62)/1 crt=40'39 lcod 0'0 active pruub 104.105857849s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:48 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 62 pg[6.a( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=8.620603561s) [1] r=-1 lpr=62 pi=[52,62)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 104.105857849s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[6.a( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62) [1] r=0 lpr=62 pi=[52,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:48 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 62 pg[6.2( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=8.622941017s) [1] r=-1 lpr=62 pi=[52,62)/1 crt=40'39 lcod 0'0 active pruub 104.108879089s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:48 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 62 pg[6.2( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=8.622881889s) [1] r=-1 lpr=62 pi=[52,62)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 104.108879089s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:48 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 62 pg[6.e( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=8.623147964s) [1] r=-1 lpr=62 pi=[52,62)/1 crt=40'39 lcod 0'0 active pruub 104.109191895s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:48 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 62 pg[6.e( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=8.623128891s) [1] r=-1 lpr=62 pi=[52,62)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 104.109191895s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:48 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 62 pg[6.6( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=8.623060226s) [1] r=-1 lpr=62 pi=[52,62)/1 crt=40'39 lcod 0'0 active pruub 104.108741760s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:48 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 62 pg[6.6( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=8.622233391s) [1] r=-1 lpr=62 pi=[52,62)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 104.108741760s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[6.2( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62) [1] r=0 lpr=62 pi=[52,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[6.e( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62) [1] r=0 lpr=62 pi=[52,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[6.6( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62) [1] r=0 lpr=62 pi=[52,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.1b( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.1d( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.1( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.17( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.9( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.13( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.b( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.5( v 58'484 (0'0,58'484] local-lis/les=61/62 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=58'484 lcod 46'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.7( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.19( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.11( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.d( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.f( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 62 pg[9.3( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:48 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 09 16:05:48 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 09 16:05:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:05:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 09 16:05:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 09 16:05:49 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63 pruub=15.476232529s) [0] async=[0] r=-1 lpr=63 pi=[56,63)/1 crt=46'483 lcod 0'0 active pruub 107.323532104s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63 pruub=15.476177216s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.323532104s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[9.17( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63 pruub=15.481518745s) [0] async=[0] r=-1 lpr=63 pi=[56,63)/1 crt=46'483 lcod 0'0 active pruub 107.328910828s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[9.17( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63 pruub=15.481460571s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.328910828s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[9.1b( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63 pruub=15.474796295s) [0] async=[0] r=-1 lpr=63 pi=[56,63)/1 crt=46'483 lcod 0'0 active pruub 107.323402405s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[9.1b( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63 pruub=15.474768639s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.323402405s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63 pruub=15.474773407s) [0] async=[0] r=-1 lpr=63 pi=[56,63)/1 crt=46'483 lcod 0'0 active pruub 107.323554993s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63 pruub=15.474747658s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.323554993s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[9.1d( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63 pruub=15.474578857s) [0] async=[0] r=-1 lpr=63 pi=[56,63)/1 crt=46'483 lcod 0'0 active pruub 107.323493958s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[9.1d( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63 pruub=15.474532127s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.323493958s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[6.2( v 40'39 (0'0,40'39] local-lis/les=62/63 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62) [1] r=0 lpr=62 pi=[52,62)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[6.6( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=62/63 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62) [1] r=0 lpr=62 pi=[52,62)/1 crt=40'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[6.e( v 40'39 lc 39'19 (0'0,40'39] local-lis/les=62/63 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62) [1] r=0 lpr=62 pi=[52,62)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:49 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 63 pg[6.a( v 40'39 (0'0,40'39] local-lis/les=62/63 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=62) [1] r=0 lpr=62 pi=[52,62)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:49 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 63 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:49 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 63 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:49 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 63 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:49 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 63 pg[9.1d( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:49 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 63 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:49 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 63 pg[9.1d( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:49 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 63 pg[9.17( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:49 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 63 pg[9.17( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:49 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 63 pg[9.1b( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:49 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 63 pg[9.1b( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:49 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 09 16:05:49 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 09 16:05:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 09 16:05:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 09 16:05:49 compute-0 ceph-mon[75222]: osdmap e62: 3 total, 3 up, 3 in
Dec 09 16:05:49 compute-0 ceph-mon[75222]: 2.1a scrub starts
Dec 09 16:05:49 compute-0 ceph-mon[75222]: 2.1a scrub ok
Dec 09 16:05:49 compute-0 ceph-mon[75222]: osdmap e63: 3 total, 3 up, 3 in
Dec 09 16:05:49 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 4 peering, 285 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 101/249 objects misplaced (40.562%); 466 B/s, 2 keys/s, 3 objects/s recovering
Dec 09 16:05:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 09 16:05:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 09 16:05:50 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.13( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.11( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.11( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.5( v 62'486 (0'0,62'486] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 pct=0'0 crt=58'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.13( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.5( v 62'486 (0'0,62'486] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=58'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.b( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.b( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.7( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.7( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.9( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.9( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.d( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.f( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.f( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.1( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.d( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.1( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.3( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.3( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.19( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.19( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.19( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.470313072s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 active pruub 107.329345703s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.19( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.470252991s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.329345703s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.5( v 62'486 (0'0,62'486] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.470113754s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=58'484 lcod 62'485 active pruub 107.329238892s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.5( v 62'486 (0'0,62'486] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.470004082s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=58'484 lcod 62'485 unknown NOTIFY pruub 107.329238892s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.3( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.470124245s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 active pruub 107.329559326s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.7( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469522476s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 active pruub 107.329055786s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.b( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469687462s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 active pruub 107.329231262s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.3( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.470056534s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.329559326s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.b( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469610214s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.329231262s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.7( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469356537s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.329055786s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.f( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469779015s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 active pruub 107.329551697s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.9( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469381332s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 active pruub 107.329025269s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.f( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469706535s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.329551697s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.9( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469110489s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.329025269s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.13( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469182014s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 active pruub 107.329238892s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.d( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469544411s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 active pruub 107.329490662s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.13( v 46'483 (0'0,46'483] local-lis/les=61/62 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469102859s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.329238892s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.d( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469316483s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.329490662s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.11( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469283104s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 active pruub 107.329490662s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.1( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.468520164s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 active pruub 107.328887939s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.11( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469064713s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.329490662s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.17( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 64 pg[9.1( v 46'483 (0'0,46'483] local-lis/les=61/62 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.468288422s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 107.328887939s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.1b( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.1d( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 64 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 09 16:05:51 compute-0 ceph-mon[75222]: 4.1e scrub starts
Dec 09 16:05:51 compute-0 ceph-mon[75222]: 4.1e scrub ok
Dec 09 16:05:51 compute-0 ceph-mon[75222]: pgmap v126: 305 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 4 peering, 285 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 101/249 objects misplaced (40.562%); 466 B/s, 2 keys/s, 3 objects/s recovering
Dec 09 16:05:51 compute-0 ceph-mon[75222]: osdmap e64: 3 total, 3 up, 3 in
Dec 09 16:05:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 09 16:05:51 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 09 16:05:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 65 pg[9.11( v 46'483 (0'0,46'483] local-lis/les=64/65 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 65 pg[9.13( v 46'483 (0'0,46'483] local-lis/les=64/65 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 65 pg[9.5( v 62'486 (0'0,62'486] local-lis/les=64/65 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=62'486 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 65 pg[9.b( v 46'483 (0'0,46'483] local-lis/les=64/65 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 65 pg[9.7( v 46'483 (0'0,46'483] local-lis/les=64/65 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 65 pg[9.9( v 46'483 (0'0,46'483] local-lis/les=64/65 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 65 pg[9.f( v 46'483 (0'0,46'483] local-lis/les=64/65 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 65 pg[9.d( v 46'483 (0'0,46'483] local-lis/les=64/65 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 65 pg[9.1( v 46'483 (0'0,46'483] local-lis/les=64/65 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 65 pg[9.3( v 46'483 (0'0,46'483] local-lis/les=64/65 n=7 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 65 pg[9.19( v 46'483 (0'0,46'483] local-lis/les=64/65 n=6 ec=56/40 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:51 compute-0 ceph-mgr[75515]: [progress INFO root] Writing back 16 completed events
Dec 09 16:05:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 16:05:51 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:51 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 11 active+recovery_wait+remapped, 9 peering, 285 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 80/249 objects misplaced (32.129%); 824 B/s, 2 keys/s, 10 objects/s recovering
Dec 09 16:05:52 compute-0 ceph-mon[75222]: osdmap e65: 3 total, 3 up, 3 in
Dec 09 16:05:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:05:52 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 09 16:05:52 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 09 16:05:53 compute-0 ceph-mon[75222]: pgmap v129: 305 pgs: 11 active+recovery_wait+remapped, 9 peering, 285 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 80/249 objects misplaced (32.129%); 824 B/s, 2 keys/s, 10 objects/s recovering
Dec 09 16:05:53 compute-0 ceph-mon[75222]: 4.b scrub starts
Dec 09 16:05:53 compute-0 ceph-mon[75222]: 4.b scrub ok
Dec 09 16:05:53 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 11 active+recovery_wait+remapped, 9 peering, 285 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 80/249 objects misplaced (32.129%); 423 B/s, 1 keys/s, 7 objects/s recovering
Dec 09 16:05:54 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec 09 16:05:54 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec 09 16:05:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:05:54 compute-0 ceph-mon[75222]: 11.16 scrub starts
Dec 09 16:05:54 compute-0 ceph-mon[75222]: 11.16 scrub ok
Dec 09 16:05:55 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 09 16:05:55 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 09 16:05:55 compute-0 ceph-mon[75222]: pgmap v130: 305 pgs: 11 active+recovery_wait+remapped, 9 peering, 285 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 80/249 objects misplaced (32.129%); 423 B/s, 1 keys/s, 7 objects/s recovering
Dec 09 16:05:55 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 731 B/s, 16 objects/s recovering
Dec 09 16:05:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 09 16:05:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 09 16:05:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 09 16:05:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 09 16:05:56 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 09 16:05:56 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 09 16:05:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:05:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:05:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:05:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:05:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:05:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:05:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 09 16:05:56 compute-0 ceph-mon[75222]: 4.6 scrub starts
Dec 09 16:05:56 compute-0 ceph-mon[75222]: 4.6 scrub ok
Dec 09 16:05:56 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 09 16:05:56 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 09 16:05:56 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 09 16:05:56 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 09 16:05:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 09 16:05:56 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 09 16:05:56 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 66 pg[6.3( v 40'39 (0'0,40'39] local-lis/les=60/61 n=2 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.705773354s) [0] r=-1 lpr=66 pi=[60,66)/1 crt=40'39 active pruub 114.134979248s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:56 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 66 pg[6.3( v 40'39 (0'0,40'39] local-lis/les=60/61 n=2 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.705702782s) [0] r=-1 lpr=66 pi=[60,66)/1 crt=40'39 unknown NOTIFY pruub 114.134979248s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:56 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 66 pg[6.f( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.705497742s) [0] r=-1 lpr=66 pi=[60,66)/1 crt=40'39 active pruub 114.135200500s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:56 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 66 pg[6.7( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.709368706s) [0] r=-1 lpr=66 pi=[60,66)/1 crt=40'39 active pruub 114.139549255s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:56 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 66 pg[6.7( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.709325790s) [0] r=-1 lpr=66 pi=[60,66)/1 crt=40'39 unknown NOTIFY pruub 114.139549255s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:56 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 66 pg[6.f( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.705204964s) [0] r=-1 lpr=66 pi=[60,66)/1 crt=40'39 unknown NOTIFY pruub 114.135200500s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:56 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 66 pg[6.b( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.709091187s) [0] r=-1 lpr=66 pi=[60,66)/1 crt=40'39 active pruub 114.139724731s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:56 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 66 pg[6.b( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.709053040s) [0] r=-1 lpr=66 pi=[60,66)/1 crt=40'39 unknown NOTIFY pruub 114.139724731s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:56 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 66 pg[6.f( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66) [0] r=0 lpr=66 pi=[60,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:56 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 66 pg[6.3( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66) [0] r=0 lpr=66 pi=[60,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:56 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 66 pg[6.b( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66) [0] r=0 lpr=66 pi=[60,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:56 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 66 pg[6.7( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66) [0] r=0 lpr=66 pi=[60,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:57 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 09 16:05:57 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 09 16:05:57 compute-0 ceph-mon[75222]: pgmap v131: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 731 B/s, 16 objects/s recovering
Dec 09 16:05:57 compute-0 ceph-mon[75222]: 4.19 scrub starts
Dec 09 16:05:57 compute-0 ceph-mon[75222]: 4.19 scrub ok
Dec 09 16:05:57 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 09 16:05:57 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 09 16:05:57 compute-0 ceph-mon[75222]: osdmap e66: 3 total, 3 up, 3 in
Dec 09 16:05:57 compute-0 ceph-mon[75222]: 7.19 scrub starts
Dec 09 16:05:57 compute-0 ceph-mon[75222]: 7.19 scrub ok
Dec 09 16:05:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 09 16:05:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 09 16:05:57 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 09 16:05:57 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 67 pg[6.7( v 40'39 lc 39'21 (0'0,40'39] local-lis/les=66/67 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66) [0] r=0 lpr=66 pi=[60,66)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:57 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 67 pg[6.b( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=66/67 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66) [0] r=0 lpr=66 pi=[60,66)/1 crt=40'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:57 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 67 pg[6.f( v 40'39 lc 39'1 (0'0,40'39] local-lis/les=66/67 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66) [0] r=0 lpr=66 pi=[60,66)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:57 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 67 pg[6.3( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=66/67 n=2 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=66) [0] r=0 lpr=66 pi=[60,66)/1 crt=40'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:57 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v134: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 515 B/s, 12 objects/s recovering
Dec 09 16:05:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 09 16:05:57 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 09 16:05:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 09 16:05:57 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 09 16:05:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 09 16:05:58 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 09 16:05:58 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 09 16:05:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 09 16:05:58 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 09 16:05:58 compute-0 ceph-mon[75222]: osdmap e67: 3 total, 3 up, 3 in
Dec 09 16:05:58 compute-0 ceph-mon[75222]: pgmap v134: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 515 B/s, 12 objects/s recovering
Dec 09 16:05:58 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 09 16:05:58 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 09 16:05:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:05:59 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 09 16:05:59 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 09 16:05:59 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 68 pg[6.4( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=68 pruub=13.966251373s) [1] r=-1 lpr=68 pi=[52,68)/1 crt=40'39 lcod 0'0 active pruub 120.109169006s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:59 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 68 pg[6.4( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=68 pruub=13.966206551s) [1] r=-1 lpr=68 pi=[52,68)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 120.109169006s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:59 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 68 pg[6.c( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=68 pruub=13.965433121s) [1] r=-1 lpr=68 pi=[52,68)/1 crt=40'39 lcod 0'0 active pruub 120.109214783s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:05:59 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 68 pg[6.c( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=68 pruub=13.965373039s) [1] r=-1 lpr=68 pi=[52,68)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 120.109214783s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:05:59 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 68 pg[6.4( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=68) [1] r=0 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:59 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 68 pg[6.c( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=68) [1] r=0 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:05:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 09 16:05:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 09 16:05:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 09 16:05:59 compute-0 ceph-mon[75222]: osdmap e68: 3 total, 3 up, 3 in
Dec 09 16:05:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 09 16:05:59 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 09 16:05:59 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 69 pg[6.4( v 40'39 lc 39'15 (0'0,40'39] local-lis/les=68/69 n=2 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=68) [1] r=0 lpr=68 pi=[52,68)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:59 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 69 pg[6.c( v 40'39 lc 39'17 (0'0,40'39] local-lis/les=68/69 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=68) [1] r=0 lpr=68 pi=[52,68)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:05:59 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v137: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:05:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 09 16:05:59 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 09 16:05:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 09 16:05:59 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 09 16:06:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 09 16:06:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 09 16:06:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 09 16:06:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 09 16:06:00 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 09 16:06:00 compute-0 ceph-mon[75222]: 4.3 scrub starts
Dec 09 16:06:00 compute-0 ceph-mon[75222]: 4.3 scrub ok
Dec 09 16:06:00 compute-0 ceph-mon[75222]: osdmap e69: 3 total, 3 up, 3 in
Dec 09 16:06:00 compute-0 ceph-mon[75222]: pgmap v137: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:00 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 09 16:06:00 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 09 16:06:00 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 09 16:06:00 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 09 16:06:00 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 70 pg[6.d( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=70 pruub=10.770060539s) [0] r=-1 lpr=70 pi=[60,70)/1 crt=40'39 active pruub 114.135055542s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:00 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 70 pg[6.d( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=70 pruub=10.770000458s) [0] r=-1 lpr=70 pi=[60,70)/1 crt=40'39 unknown NOTIFY pruub 114.135055542s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:00 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 70 pg[6.5( v 40'39 (0'0,40'39] local-lis/les=60/61 n=2 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=70 pruub=10.769945145s) [0] r=-1 lpr=70 pi=[60,70)/1 crt=40'39 active pruub 114.135231018s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:00 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 70 pg[6.5( v 40'39 (0'0,40'39] local-lis/les=60/61 n=2 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=70 pruub=10.769908905s) [0] r=-1 lpr=70 pi=[60,70)/1 crt=40'39 unknown NOTIFY pruub 114.135231018s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:00 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 70 pg[6.d( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=70) [0] r=0 lpr=70 pi=[60,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:00 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 70 pg[6.5( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=70) [0] r=0 lpr=70 pi=[60,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
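Each `start_peering_interval` entry above encodes an interval change for one PG: `up [1] -> [0]` and `acting [1] -> [0]` are the old and new OSD sets, `acting_primary`/`up_primary` track the primary, and `role 0 -> -1` means the local OSD dropped from primary to a stray replica. A small parser for pulling those transitions out of a journal dump is sketched below; the regex is written against the exact field layout of these lines and is an assumption about this build's log format, not a stable Ceph interface.

```python
import re

# Matches the transition fields of a start_peering_interval line, e.g.
# "pg[6.5( ... )] ... up [1] -> [0], acting [1] -> [0], ... role 0 -> -1"
PEERING = re.compile(
    r"pg\[(?P<pgid>\d+\.[0-9a-f]+)\(.*?"
    r"up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], "
    r"acting \[(?P<acting_old>[\d,]*)\] -> \[(?P<acting_new>[\d,]*)\].*?"
    r"role (?P<role_old>-?\d+) -> (?P<role_new>-?\d+)")


def peering_events(lines):
    """Yield dicts of up/acting/role transitions found in journal lines."""
    for line in lines:
        m = PEERING.search(line)
        if m:
            yield m.groupdict()


demo = ("osd.1 pg_epoch: 70 pg[6.5( v 40'39 (0'0,40'39] sis=70) [0] "
        "r=-1 lpr=70] PeeringState::start_peering_interval "
        "up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, "
        "up_primary 1 -> 0, role 0 -> -1, features acting ...")
print(next(peering_events([demo])))
# {'pgid': '6.5', 'up_old': '1', 'up_new': '0', 'acting_old': '1', ...}
```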
Dec 09 16:06:01 compute-0 sudo[98179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaikeuoczinketlomdybqdpmxjewpvja ; /usr/bin/python3'
Dec 09 16:06:01 compute-0 sudo[98179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:06:01 compute-0 python3[98181]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:06:01 compute-0 podman[98182]: 2025-12-09 16:06:01.542381007 +0000 UTC m=+0.041092115 container create e6ce2e3650a63f7aff83f1c6655d632841694bab380d15067fb7cce852019ca6 (image=quay.io/ceph/ceph:v20, name=musing_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:06:01 compute-0 systemd[1]: Started libpod-conmon-e6ce2e3650a63f7aff83f1c6655d632841694bab380d15067fb7cce852019ca6.scope.
Dec 09 16:06:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:06:01 compute-0 podman[98182]: 2025-12-09 16:06:01.524438163 +0000 UTC m=+0.023149251 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14cba319297a13a2bee702f7aae7c08ce269b4ba40d5e8d271a67227f25ee991/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14cba319297a13a2bee702f7aae7c08ce269b4ba40d5e8d271a67227f25ee991/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 09 16:06:01 compute-0 podman[98182]: 2025-12-09 16:06:01.636501799 +0000 UTC m=+0.135212927 container init e6ce2e3650a63f7aff83f1c6655d632841694bab380d15067fb7cce852019ca6 (image=quay.io/ceph/ceph:v20, name=musing_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 09 16:06:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 09 16:06:01 compute-0 podman[98182]: 2025-12-09 16:06:01.645399409 +0000 UTC m=+0.144110497 container start e6ce2e3650a63f7aff83f1c6655d632841694bab380d15067fb7cce852019ca6 (image=quay.io/ceph/ceph:v20, name=musing_fermat, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 16:06:01 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 09 16:06:01 compute-0 podman[98182]: 2025-12-09 16:06:01.650848402 +0000 UTC m=+0.149559540 container attach e6ce2e3650a63f7aff83f1c6655d632841694bab380d15067fb7cce852019ca6 (image=quay.io/ceph/ceph:v20, name=musing_fermat, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:06:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 09 16:06:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 09 16:06:01 compute-0 ceph-mon[75222]: osdmap e70: 3 total, 3 up, 3 in
Dec 09 16:06:01 compute-0 ceph-mon[75222]: 5.1f scrub starts
Dec 09 16:06:01 compute-0 ceph-mon[75222]: 5.1f scrub ok
Dec 09 16:06:01 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 71 pg[6.5( v 40'39 lc 39'11 (0'0,40'39] local-lis/les=70/71 n=2 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=70) [0] r=0 lpr=70 pi=[60,70)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:01 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 71 pg[6.d( v 40'39 lc 39'13 (0'0,40'39] local-lis/les=70/71 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=70) [0] r=0 lpr=70 pi=[60,70)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:01 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec 09 16:06:01 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec 09 16:06:01 compute-0 musing_fermat[98197]: could not fetch user info: no user info saved
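The failed lookup above and the `user create` call that follows at 16:06:02 form a check-then-create pattern: `radosgw-admin user info --uid openstack` exits non-zero with "could not fetch user info: no user info saved", so the playbook creates the user. A hedged sketch of the same idempotent logic follows, driving `radosgw-admin` through `podman run` as the log does; the image, fsid, and keyring paths are copied from these entries (the extra `assimilate_ceph.conf` volume is omitted), and the non-zero-exit test is an assumption about radosgw-admin's behaviour rather than a documented contract.

```python
import json
import subprocess

# Invocation copied from the podman commands in the log (one volume trimmed).
RGW_ADMIN = ["podman", "run", "--rm", "--net=host", "--ipc=host",
             "--volume", "/etc/ceph:/etc/ceph:z",
             "--entrypoint", "radosgw-admin", "quay.io/ceph/ceph:v20",
             "--fsid", "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
             "-c", "/etc/ceph/ceph.conf",
             "-k", "/etc/ceph/ceph.client.admin.keyring"]


def ensure_rgw_user(uid, display_name):
    """Return the RGW user's info dict, creating the user if it is missing."""
    info = subprocess.run(RGW_ADMIN + ["user", "info", "--uid", uid],
                          capture_output=True, text=True)
    if info.returncode == 0:
        return json.loads(info.stdout)
    # e.g. "could not fetch user info: no user info saved" -> create it.
    created = subprocess.run(
        RGW_ADMIN + ["user", "create", "--uid", uid,
                     "--display-name", display_name],
        check=True, capture_output=True, text=True)
    return json.loads(created.stdout)


user = ensure_rgw_user("openstack", "openstack")
print(user["user_id"], user["keys"][0]["access_key"])
```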
Dec 09 16:06:01 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v140: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 161 B/s, 2 keys/s, 1 objects/s recovering
Dec 09 16:06:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 09 16:06:01 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 09 16:06:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 09 16:06:01 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 09 16:06:01 compute-0 systemd[1]: libpod-e6ce2e3650a63f7aff83f1c6655d632841694bab380d15067fb7cce852019ca6.scope: Deactivated successfully.
Dec 09 16:06:01 compute-0 podman[98182]: 2025-12-09 16:06:01.904628466 +0000 UTC m=+0.403339564 container died e6ce2e3650a63f7aff83f1c6655d632841694bab380d15067fb7cce852019ca6 (image=quay.io/ceph/ceph:v20, name=musing_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:06:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-14cba319297a13a2bee702f7aae7c08ce269b4ba40d5e8d271a67227f25ee991-merged.mount: Deactivated successfully.
Dec 09 16:06:01 compute-0 podman[98182]: 2025-12-09 16:06:01.947755266 +0000 UTC m=+0.446466354 container remove e6ce2e3650a63f7aff83f1c6655d632841694bab380d15067fb7cce852019ca6 (image=quay.io/ceph/ceph:v20, name=musing_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:06:01 compute-0 systemd[1]: libpod-conmon-e6ce2e3650a63f7aff83f1c6655d632841694bab380d15067fb7cce852019ca6.scope: Deactivated successfully.
Dec 09 16:06:01 compute-0 sudo[98179]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:02 compute-0 sudo[98318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmjwlawcgrpfkxozzysojnuhietcaqwz ; /usr/bin/python3'
Dec 09 16:06:02 compute-0 sudo[98318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:06:02 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec 09 16:06:02 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec 09 16:06:02 compute-0 python3[98320]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:06:02 compute-0 podman[98321]: 2025-12-09 16:06:02.392619095 +0000 UTC m=+0.068905936 container create b468f220da7f5af3a7269e5bc0c2b4343a723e4c4b5fc579b046ad3c8957ae66 (image=quay.io/ceph/ceph:v20, name=sleepy_noyce, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 09 16:06:02 compute-0 systemd[1]: Started libpod-conmon-b468f220da7f5af3a7269e5bc0c2b4343a723e4c4b5fc579b046ad3c8957ae66.scope.
Dec 09 16:06:02 compute-0 podman[98321]: 2025-12-09 16:06:02.363163458 +0000 UTC m=+0.039450379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 09 16:06:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03703f06e1249944837dd196b1a6720053af064bfd640554e18c5ecb74510b7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03703f06e1249944837dd196b1a6720053af064bfd640554e18c5ecb74510b7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:02 compute-0 podman[98321]: 2025-12-09 16:06:02.480606225 +0000 UTC m=+0.156893086 container init b468f220da7f5af3a7269e5bc0c2b4343a723e4c4b5fc579b046ad3c8957ae66 (image=quay.io/ceph/ceph:v20, name=sleepy_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 09 16:06:02 compute-0 podman[98321]: 2025-12-09 16:06:02.48651465 +0000 UTC m=+0.162801501 container start b468f220da7f5af3a7269e5bc0c2b4343a723e4c4b5fc579b046ad3c8957ae66 (image=quay.io/ceph/ceph:v20, name=sleepy_noyce, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 09 16:06:02 compute-0 podman[98321]: 2025-12-09 16:06:02.489899485 +0000 UTC m=+0.166186376 container attach b468f220da7f5af3a7269e5bc0c2b4343a723e4c4b5fc579b046ad3c8957ae66 (image=quay.io/ceph/ceph:v20, name=sleepy_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:06:02 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 09 16:06:02 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 09 16:06:02 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 09 16:06:02 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 09 16:06:02 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 09 16:06:02 compute-0 ceph-mon[75222]: osdmap e71: 3 total, 3 up, 3 in
Dec 09 16:06:02 compute-0 ceph-mon[75222]: 10.1f scrub starts
Dec 09 16:06:02 compute-0 ceph-mon[75222]: 10.1f scrub ok
Dec 09 16:06:02 compute-0 ceph-mon[75222]: pgmap v140: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 161 B/s, 2 keys/s, 1 objects/s recovering
Dec 09 16:06:02 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 09 16:06:02 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 09 16:06:02 compute-0 ceph-mon[75222]: 8.16 scrub starts
Dec 09 16:06:02 compute-0 ceph-mon[75222]: 8.16 scrub ok
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]: {
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "user_id": "openstack",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "display_name": "openstack",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "email": "",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "suspended": 0,
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "max_buckets": 1000,
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "subusers": [],
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "keys": [
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:         {
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:             "user": "openstack",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:             "access_key": "VZPLWTYGO22F54USHPUH",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:             "secret_key": "uKEBdBy58toDsDWVdgD7KZ2zABjytfBieDlj4JT3",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:             "active": true,
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:             "create_date": "2025-12-09T16:06:02.676414Z"
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:         }
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     ],
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "swift_keys": [],
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "caps": [],
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "op_mask": "read, write, delete",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "default_placement": "",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "default_storage_class": "",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "placement_tags": [],
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "bucket_quota": {
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:         "enabled": false,
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:         "check_on_raw": false,
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:         "max_size": -1,
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:         "max_size_kb": 0,
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:         "max_objects": -1
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     },
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "user_quota": {
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:         "enabled": false,
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:         "check_on_raw": false,
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:         "max_size": -1,
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:         "max_size_kb": 0,
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:         "max_objects": -1
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     },
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "temp_url_keys": [],
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "type": "rgw",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "mfa_ids": [],
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "account_id": "",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "path": "/",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "create_date": "2025-12-09T16:06:02.676038Z",
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "tags": [],
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]:     "group_ids": []
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]: }
Dec 09 16:06:02 compute-0 sleepy_noyce[98336]: 
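The JSON document above is the complete `radosgw-admin user create` response; the S3 credentials the rest of the deployment needs are the `access_key`/`secret_key` pair under `keys[0]`. A short sketch of turning that output into an AWS-style credentials profile follows; the input and output file names are hypothetical, chosen only for illustration.

```python
import configparser
import json
import pathlib

# Hypothetical: the captured `radosgw-admin user create` JSON from above.
doc = json.loads(pathlib.Path("user_create.json").read_text())
key = doc["keys"][0]

profile = configparser.ConfigParser()
profile["openstack"] = {
    "aws_access_key_id": key["access_key"],
    "aws_secret_access_key": key["secret_key"],
}
# Hypothetical destination; any ini-style credentials file would do.
with open("rgw-credentials.ini", "w") as fh:
    profile.write(fh)
print("wrote profile for", doc["user_id"])
```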
Dec 09 16:06:02 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 72 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=14.737289429s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=46'483 lcod 0'0 active pruub 119.963844299s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:02 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 72 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=14.737234116s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 119.963844299s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:02 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 72 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=72) [2] r=0 lpr=72 pi=[56,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:02 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 72 pg[9.e( v 71'489 (0'0,71'489] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=14.740386963s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=71'488 lcod 71'488 active pruub 119.967506409s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:02 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 72 pg[9.e( v 71'489 (0'0,71'489] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=14.740343094s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=71'488 lcod 71'488 unknown NOTIFY pruub 119.967506409s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:02 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 72 pg[9.6( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=14.740427017s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=46'483 lcod 0'0 active pruub 119.967864990s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:02 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 72 pg[9.6( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=14.740172386s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 119.967864990s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:02 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 72 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=14.740485191s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=71'484 lcod 71'484 active pruub 119.968482971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:02 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 72 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=14.740448952s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=71'484 lcod 71'484 unknown NOTIFY pruub 119.968482971s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:02 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 72 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=72) [2] r=0 lpr=72 pi=[56,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:02 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 72 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=72) [2] r=0 lpr=72 pi=[56,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:02 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 72 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=72) [2] r=0 lpr=72 pi=[56,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:02 compute-0 systemd[1]: libpod-b468f220da7f5af3a7269e5bc0c2b4343a723e4c4b5fc579b046ad3c8957ae66.scope: Deactivated successfully.
Dec 09 16:06:02 compute-0 podman[98321]: 2025-12-09 16:06:02.716094044 +0000 UTC m=+0.392380915 container died b468f220da7f5af3a7269e5bc0c2b4343a723e4c4b5fc579b046ad3c8957ae66 (image=quay.io/ceph/ceph:v20, name=sleepy_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b03703f06e1249944837dd196b1a6720053af064bfd640554e18c5ecb74510b7-merged.mount: Deactivated successfully.
Dec 09 16:06:02 compute-0 podman[98321]: 2025-12-09 16:06:02.772093196 +0000 UTC m=+0.448380037 container remove b468f220da7f5af3a7269e5bc0c2b4343a723e4c4b5fc579b046ad3c8957ae66 (image=quay.io/ceph/ceph:v20, name=sleepy_noyce, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 09 16:06:02 compute-0 systemd[1]: libpod-conmon-b468f220da7f5af3a7269e5bc0c2b4343a723e4c4b5fc579b046ad3c8957ae66.scope: Deactivated successfully.
Dec 09 16:06:02 compute-0 sudo[98318]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 09 16:06:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 09 16:06:03 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 09 16:06:03 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[56,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:03 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[56,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:03 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[56,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:03 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[56,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:03 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[56,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:03 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[56,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:03 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 09 16:06:03 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 09 16:06:03 compute-0 ceph-mon[75222]: osdmap e72: 3 total, 3 up, 3 in
Dec 09 16:06:03 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[56,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:03 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[56,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:03 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 73 pg[9.e( v 71'489 (0'0,71'489] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=0 lpr=73 pi=[56,73)/1 crt=71'488 lcod 71'488 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:03 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 73 pg[9.e( v 71'489 (0'0,71'489] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=0 lpr=73 pi=[56,73)/1 crt=71'488 lcod 71'488 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:03 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 73 pg[9.6( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=0 lpr=73 pi=[56,73)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:03 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 73 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=0 lpr=73 pi=[56,73)/1 crt=71'484 lcod 71'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:03 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 73 pg[9.6( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=0 lpr=73 pi=[56,73)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:03 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 73 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=0 lpr=73 pi=[56,73)/1 crt=71'484 lcod 71'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:03 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 73 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=0 lpr=73 pi=[56,73)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:03 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 73 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] r=0 lpr=73 pi=[56,73)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:03 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 161 B/s, 2 keys/s, 1 objects/s recovering
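The pgmap above briefly reports `4 unknown` while the remapped PGs from epoch 73 re-peer; by v149 at 16:06:09 all 305 PGs are back to `active+clean`. A small watcher that tails pgmap lines and flags any state other than `active+clean` is sketched below; the regex targets the exact `pgmap vN: X pgs: ...` layout of this journal, which is an assumption about this build's log format, and the `journalctl` pipeline in the comment is just one way to feed it.

```python
import re
import sys

# "pgmap v143: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, ..."
PGMAP = re.compile(r"pgmap v(?P<ver>\d+): (?P<total>\d+) pgs: (?P<states>[^;]+)")
STATE = re.compile(r"(?P<count>\d+) (?P<state>[a-z+]+)")


def unhealthy(states_field):
    """Yield (count, state) for every PG state that is not active+clean."""
    for m in STATE.finditer(states_field):
        if m["state"] != "active+clean":
            yield int(m["count"]), m["state"]


# Example feed: journalctl -f | python3 watch_pgmap.py
for line in sys.stdin:
    m = PGMAP.search(line)
    if not m:
        continue
    bad = list(unhealthy(m["states"]))
    if bad:
        print(f"pgmap v{m['ver']}: " + ", ".join(f"{n} {s}" for n, s in bad))
```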
Dec 09 16:06:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:06:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 09 16:06:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 09 16:06:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 09 16:06:04 compute-0 ceph-mon[75222]: osdmap e73: 3 total, 3 up, 3 in
Dec 09 16:06:04 compute-0 ceph-mon[75222]: pgmap v143: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 161 B/s, 2 keys/s, 1 objects/s recovering
Dec 09 16:06:04 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 74 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=73/74 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[56,73)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:04 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 74 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=73/74 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[56,73)/1 crt=71'485 lcod 71'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:04 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 74 pg[9.6( v 46'483 (0'0,46'483] local-lis/les=73/74 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[56,73)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:04 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 74 pg[9.e( v 71'489 (0'0,71'489] local-lis/les=73/74 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[56,73)/1 crt=71'489 lcod 71'488 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:05 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 09 16:06:05 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 09 16:06:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 09 16:06:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 09 16:06:05 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 09 16:06:05 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 75 pg[9.e( v 71'489 (0'0,71'489] local-lis/les=0/0 n=7 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75) [2] r=0 lpr=75 pi=[56,75)/1 pct=0'0 crt=71'489 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:05 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 75 pg[9.e( v 71'489 (0'0,71'489] local-lis/les=0/0 n=7 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75) [2] r=0 lpr=75 pi=[56,75)/1 crt=71'489 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:05 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 75 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75) [2] r=0 lpr=75 pi=[56,75)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:05 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 75 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75) [2] r=0 lpr=75 pi=[56,75)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:05 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 75 pg[9.6( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75) [2] r=0 lpr=75 pi=[56,75)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:05 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 75 pg[9.6( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75) [2] r=0 lpr=75 pi=[56,75)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:05 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 75 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=0/0 n=6 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75) [2] r=0 lpr=75 pi=[56,75)/1 pct=0'0 crt=71'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:05 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 75 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=0/0 n=6 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75) [2] r=0 lpr=75 pi=[56,75)/1 crt=71'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:05 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 75 pg[9.e( v 71'489 (0'0,71'489] local-lis/les=73/74 n=7 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75 pruub=14.990313530s) [2] async=[2] r=-1 lpr=75 pi=[56,75)/1 crt=71'489 lcod 71'488 active pruub 123.229156494s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:05 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 75 pg[9.e( v 71'489 (0'0,71'489] local-lis/les=73/74 n=7 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75 pruub=14.990078926s) [2] r=-1 lpr=75 pi=[56,75)/1 crt=71'489 lcod 71'488 unknown NOTIFY pruub 123.229156494s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:05 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 75 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=73/74 n=6 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75 pruub=14.987836838s) [2] async=[2] r=-1 lpr=75 pi=[56,75)/1 crt=46'483 lcod 0'0 active pruub 123.227218628s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:05 compute-0 ceph-mon[75222]: osdmap e74: 3 total, 3 up, 3 in
Dec 09 16:06:05 compute-0 ceph-mon[75222]: 3.1c scrub starts
Dec 09 16:06:05 compute-0 ceph-mon[75222]: 3.1c scrub ok
Dec 09 16:06:05 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 75 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=73/74 n=6 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75 pruub=14.987620354s) [2] r=-1 lpr=75 pi=[56,75)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 123.227218628s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:05 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 75 pg[9.6( v 46'483 (0'0,46'483] local-lis/les=73/74 n=7 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75 pruub=14.988758087s) [2] async=[2] r=-1 lpr=75 pi=[56,75)/1 crt=46'483 lcod 0'0 active pruub 123.229133606s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:05 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 75 pg[9.6( v 46'483 (0'0,46'483] local-lis/les=73/74 n=7 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75 pruub=14.988684654s) [2] r=-1 lpr=75 pi=[56,75)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 123.229133606s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:05 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 75 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=73/74 n=6 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75 pruub=14.988007545s) [2] async=[2] r=-1 lpr=75 pi=[56,75)/1 crt=71'485 lcod 71'484 active pruub 123.229080200s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:05 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 75 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=73/74 n=6 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75 pruub=14.987929344s) [2] r=-1 lpr=75 pi=[56,75)/1 crt=71'485 lcod 71'484 unknown NOTIFY pruub 123.229080200s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:05 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 33 op/s; 445 B/s, 2 objects/s recovering
Dec 09 16:06:06 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 09 16:06:06 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 09 16:06:06 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 09 16:06:06 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 09 16:06:06 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 09 16:06:06 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 09 16:06:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 09 16:06:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 09 16:06:06 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 09 16:06:06 compute-0 ceph-mon[75222]: osdmap e75: 3 total, 3 up, 3 in
Dec 09 16:06:06 compute-0 ceph-mon[75222]: pgmap v146: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 33 op/s; 445 B/s, 2 objects/s recovering
Dec 09 16:06:06 compute-0 ceph-mon[75222]: 8.17 scrub starts
Dec 09 16:06:06 compute-0 ceph-mon[75222]: 8.17 scrub ok
Dec 09 16:06:06 compute-0 ceph-mon[75222]: osdmap e76: 3 total, 3 up, 3 in
Dec 09 16:06:06 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 76 pg[9.e( v 71'489 (0'0,71'489] local-lis/les=75/76 n=7 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75) [2] r=0 lpr=75 pi=[56,75)/1 crt=71'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:06 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 76 pg[9.6( v 46'483 (0'0,46'483] local-lis/les=75/76 n=7 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75) [2] r=0 lpr=75 pi=[56,75)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:06 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 76 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=75/76 n=6 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75) [2] r=0 lpr=75 pi=[56,75)/1 crt=71'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:06 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 76 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=75/76 n=6 ec=56/40 lis/c=73/56 les/c/f=74/57/0 sis=75) [2] r=0 lpr=75 pi=[56,75)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:07 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 09 16:06:07 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 09 16:06:07 compute-0 ceph-mon[75222]: 4.0 scrub starts
Dec 09 16:06:07 compute-0 ceph-mon[75222]: 4.0 scrub ok
Dec 09 16:06:07 compute-0 ceph-mon[75222]: 5.10 scrub starts
Dec 09 16:06:07 compute-0 ceph-mon[75222]: 5.10 scrub ok
Dec 09 16:06:07 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 57 op/s; 422 B/s, 2 objects/s recovering
Dec 09 16:06:08 compute-0 ceph-mon[75222]: 4.c scrub starts
Dec 09 16:06:08 compute-0 ceph-mon[75222]: 4.c scrub ok
Dec 09 16:06:08 compute-0 ceph-mon[75222]: pgmap v148: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 57 op/s; 422 B/s, 2 objects/s recovering
Dec 09 16:06:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:06:09 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.3 KiB/s wr, 74 op/s; 435 B/s, 5 objects/s recovering
Dec 09 16:06:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 09 16:06:09 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 09 16:06:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 09 16:06:09 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 09 16:06:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 09 16:06:09 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 09 16:06:09 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 09 16:06:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 09 16:06:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 09 16:06:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 09 16:06:09 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 77 pg[9.7( v 71'487 (0'0,71'487] local-lis/les=64/65 n=7 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=77 pruub=13.377962112s) [2] r=-1 lpr=77 pi=[64,77)/1 crt=71'486 lcod 71'486 active pruub 130.035659790s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:09 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 77 pg[9.7( v 71'487 (0'0,71'487] local-lis/les=64/65 n=7 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=77 pruub=13.377853394s) [2] r=-1 lpr=77 pi=[64,77)/1 crt=71'486 lcod 71'486 unknown NOTIFY pruub 130.035659790s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:09 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 77 pg[9.17( v 71'485 (0'0,71'485] local-lis/les=63/64 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=77 pruub=12.367231369s) [2] r=-1 lpr=77 pi=[63,77)/1 crt=71'484 lcod 71'484 active pruub 129.025604248s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:09 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 77 pg[9.17( v 71'485 (0'0,71'485] local-lis/les=63/64 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=77 pruub=12.367178917s) [2] r=-1 lpr=77 pi=[63,77)/1 crt=71'484 lcod 71'484 unknown NOTIFY pruub 129.025604248s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:09 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 77 pg[9.f( v 71'485 (0'0,71'485] local-lis/les=64/65 n=7 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=77 pruub=13.377143860s) [2] r=-1 lpr=77 pi=[64,77)/1 crt=71'484 lcod 71'484 active pruub 130.035842896s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:09 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 77 pg[9.f( v 71'485 (0'0,71'485] local-lis/les=64/65 n=7 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=77 pruub=13.377074242s) [2] r=-1 lpr=77 pi=[64,77)/1 crt=71'484 lcod 71'484 unknown NOTIFY pruub 130.035842896s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:09 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 77 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=77 pruub=12.366599083s) [2] r=-1 lpr=77 pi=[63,77)/1 crt=46'483 active pruub 129.025650024s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:09 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 77 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=77 pruub=12.366574287s) [2] r=-1 lpr=77 pi=[63,77)/1 crt=46'483 unknown NOTIFY pruub 129.025650024s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:09 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 09 16:06:09 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 77 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=77) [2] r=0 lpr=77 pi=[64,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:09 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 77 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=77) [2] r=0 lpr=77 pi=[63,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:09 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 77 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=77) [2] r=0 lpr=77 pi=[64,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:09 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 77 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=77) [2] r=0 lpr=77 pi=[63,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:10 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec 09 16:06:10 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec 09 16:06:10 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 09 16:06:10 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 09 16:06:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 09 16:06:10 compute-0 ceph-mon[75222]: pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.3 KiB/s wr, 74 op/s; 435 B/s, 5 objects/s recovering
Dec 09 16:06:10 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 09 16:06:10 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 09 16:06:10 compute-0 ceph-mon[75222]: osdmap e77: 3 total, 3 up, 3 in
Dec 09 16:06:10 compute-0 ceph-mon[75222]: 11.13 scrub starts
Dec 09 16:06:10 compute-0 ceph-mon[75222]: 11.13 scrub ok
Dec 09 16:06:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 09 16:06:10 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 09 16:06:10 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 78 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[63,78)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:10 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 78 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[63,78)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:10 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 78 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[64,78)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:10 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 78 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[64,78)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:10 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 78 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[64,78)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:10 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 78 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[64,78)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:10 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 78 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[63,78)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:10 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 78 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[63,78)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:10 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 78 pg[9.f( v 71'485 (0'0,71'485] local-lis/les=64/65 n=7 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=78) [2]/[0] r=0 lpr=78 pi=[64,78)/1 crt=71'484 lcod 71'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:10 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 78 pg[9.17( v 71'485 (0'0,71'485] local-lis/les=63/64 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=78) [2]/[0] r=0 lpr=78 pi=[63,78)/1 crt=71'484 lcod 71'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:10 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 78 pg[9.17( v 71'485 (0'0,71'485] local-lis/les=63/64 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=78) [2]/[0] r=0 lpr=78 pi=[63,78)/1 crt=71'484 lcod 71'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:10 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 78 pg[9.7( v 71'487 (0'0,71'487] local-lis/les=64/65 n=7 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=78) [2]/[0] r=0 lpr=78 pi=[64,78)/1 crt=71'486 lcod 71'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:10 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 78 pg[9.7( v 71'487 (0'0,71'487] local-lis/les=64/65 n=7 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=78) [2]/[0] r=0 lpr=78 pi=[64,78)/1 crt=71'486 lcod 71'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:10 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 78 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=78) [2]/[0] r=0 lpr=78 pi=[63,78)/1 crt=46'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:10 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 78 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=78) [2]/[0] r=0 lpr=78 pi=[63,78)/1 crt=46'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:10 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 78 pg[9.f( v 71'485 (0'0,71'485] local-lis/les=64/65 n=7 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=78) [2]/[0] r=0 lpr=78 pi=[64,78)/1 crt=71'484 lcod 71'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:11 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 09 16:06:11 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 09 16:06:11 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 09 16:06:11 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 09 16:06:11 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 34 op/s; 138 B/s, 3 objects/s recovering
Dec 09 16:06:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 09 16:06:11 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 09 16:06:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 09 16:06:11 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 09 16:06:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 09 16:06:11 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 09 16:06:11 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 09 16:06:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 09 16:06:11 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 09 16:06:11 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 79 pg[6.8( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=79 pruub=9.420779228s) [2] r=-1 lpr=79 pi=[52,79)/1 crt=40'39 lcod 0'0 active pruub 128.106475830s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:11 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 79 pg[6.8( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=79 pruub=9.420727730s) [2] r=-1 lpr=79 pi=[52,79)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 128.106475830s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:11 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 79 pg[6.8( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=79) [2] r=0 lpr=79 pi=[52,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:11 compute-0 ceph-mon[75222]: 4.15 scrub starts
Dec 09 16:06:11 compute-0 ceph-mon[75222]: 4.15 scrub ok
Dec 09 16:06:11 compute-0 ceph-mon[75222]: osdmap e78: 3 total, 3 up, 3 in
Dec 09 16:06:11 compute-0 ceph-mon[75222]: 7.1e scrub starts
Dec 09 16:06:11 compute-0 ceph-mon[75222]: 7.1e scrub ok
Dec 09 16:06:11 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 09 16:06:11 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 09 16:06:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 79 pg[9.17( v 71'485 (0'0,71'485] local-lis/les=78/79 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=78) [2]/[0] async=[2] r=0 lpr=78 pi=[63,78)/1 crt=71'485 lcod 71'484 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 79 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=78/79 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=78) [2]/[0] async=[2] r=0 lpr=78 pi=[63,78)/1 crt=46'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 79 pg[9.7( v 71'487 (0'0,71'487] local-lis/les=78/79 n=7 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=78) [2]/[0] async=[2] r=0 lpr=78 pi=[64,78)/1 crt=71'487 lcod 71'486 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 79 pg[9.f( v 71'485 (0'0,71'485] local-lis/les=78/79 n=7 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=78) [2]/[0] async=[2] r=0 lpr=78 pi=[64,78)/1 crt=71'485 lcod 71'484 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:12 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 09 16:06:12 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 09 16:06:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 09 16:06:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 09 16:06:12 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 09 16:06:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 80 pg[9.7( v 71'487 (0'0,71'487] local-lis/les=78/79 n=7 ec=56/40 lis/c=78/64 les/c/f=79/65/0 sis=80 pruub=15.008703232s) [2] async=[2] r=-1 lpr=80 pi=[64,80)/1 crt=71'487 lcod 71'486 active pruub 134.693878174s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 80 pg[9.17( v 71'485 (0'0,71'485] local-lis/les=78/79 n=6 ec=56/40 lis/c=78/63 les/c/f=79/64/0 sis=80 pruub=15.006109238s) [2] async=[2] r=-1 lpr=80 pi=[63,80)/1 crt=71'485 lcod 71'484 active pruub 134.691390991s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 80 pg[9.17( v 71'485 (0'0,71'485] local-lis/les=78/79 n=6 ec=56/40 lis/c=78/63 les/c/f=79/64/0 sis=80 pruub=15.006019592s) [2] r=-1 lpr=80 pi=[63,80)/1 crt=71'485 lcod 71'484 unknown NOTIFY pruub 134.691390991s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 80 pg[9.7( v 71'487 (0'0,71'487] local-lis/les=78/79 n=7 ec=56/40 lis/c=78/64 les/c/f=79/65/0 sis=80 pruub=15.008369446s) [2] r=-1 lpr=80 pi=[64,80)/1 crt=71'487 lcod 71'486 unknown NOTIFY pruub 134.693878174s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 80 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=78/79 n=6 ec=56/40 lis/c=78/63 les/c/f=79/64/0 sis=80 pruub=15.008020401s) [2] async=[2] r=-1 lpr=80 pi=[63,80)/1 crt=46'483 active pruub 134.693817139s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 80 pg[9.f( v 71'485 (0'0,71'485] local-lis/les=78/79 n=7 ec=56/40 lis/c=78/64 les/c/f=79/65/0 sis=80 pruub=15.008124352s) [2] async=[2] r=-1 lpr=80 pi=[64,80)/1 crt=71'485 lcod 71'484 active pruub 134.693923950s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 80 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=78/79 n=6 ec=56/40 lis/c=78/63 les/c/f=79/64/0 sis=80 pruub=15.007989883s) [2] r=-1 lpr=80 pi=[63,80)/1 crt=46'483 unknown NOTIFY pruub 134.693817139s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:12 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 80 pg[9.17( v 71'485 (0'0,71'485] local-lis/les=0/0 n=6 ec=56/40 lis/c=78/63 les/c/f=79/64/0 sis=80) [2] r=0 lpr=80 pi=[63,80)/1 pct=0'0 crt=71'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:12 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 80 pg[9.17( v 71'485 (0'0,71'485] local-lis/les=0/0 n=6 ec=56/40 lis/c=78/63 les/c/f=79/64/0 sis=80) [2] r=0 lpr=80 pi=[63,80)/1 crt=71'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 80 pg[9.f( v 71'485 (0'0,71'485] local-lis/les=78/79 n=7 ec=56/40 lis/c=78/64 les/c/f=79/65/0 sis=80 pruub=15.007668495s) [2] r=-1 lpr=80 pi=[64,80)/1 crt=71'485 lcod 71'484 unknown NOTIFY pruub 134.693923950s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:12 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 80 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=78/63 les/c/f=79/64/0 sis=80) [2] r=0 lpr=80 pi=[63,80)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:12 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 80 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=78/63 les/c/f=79/64/0 sis=80) [2] r=0 lpr=80 pi=[63,80)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:12 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 80 pg[9.f( v 71'485 (0'0,71'485] local-lis/les=0/0 n=7 ec=56/40 lis/c=78/64 les/c/f=79/65/0 sis=80) [2] r=0 lpr=80 pi=[64,80)/1 pct=0'0 crt=71'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:12 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 80 pg[9.f( v 71'485 (0'0,71'485] local-lis/les=0/0 n=7 ec=56/40 lis/c=78/64 les/c/f=79/65/0 sis=80) [2] r=0 lpr=80 pi=[64,80)/1 crt=71'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:12 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 80 pg[9.7( v 71'487 (0'0,71'487] local-lis/les=0/0 n=7 ec=56/40 lis/c=78/64 les/c/f=79/65/0 sis=80) [2] r=0 lpr=80 pi=[64,80)/1 pct=0'0 crt=71'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:12 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 80 pg[9.7( v 71'487 (0'0,71'487] local-lis/les=0/0 n=7 ec=56/40 lis/c=78/64 les/c/f=79/65/0 sis=80) [2] r=0 lpr=80 pi=[64,80)/1 crt=71'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:13 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 80 pg[6.8( v 40'39 (0'0,40'39] local-lis/les=79/80 n=1 ec=52/27 lis/c=52/52 les/c/f=53/53/0 sis=79) [2] r=0 lpr=79 pi=[52,79)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:13 compute-0 ceph-mon[75222]: 4.16 scrub starts
Dec 09 16:06:13 compute-0 ceph-mon[75222]: 4.16 scrub ok
Dec 09 16:06:13 compute-0 ceph-mon[75222]: pgmap v152: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 34 op/s; 138 B/s, 3 objects/s recovering
Dec 09 16:06:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 09 16:06:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 09 16:06:13 compute-0 ceph-mon[75222]: osdmap e79: 3 total, 3 up, 3 in
Dec 09 16:06:13 compute-0 ceph-mon[75222]: osdmap e80: 3 total, 3 up, 3 in
Dec 09 16:06:13 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec 09 16:06:13 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec 09 16:06:13 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 79 pg[9.8( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=11.991297722s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=46'483 lcod 0'0 active pruub 127.968696594s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:13 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 79 pg[9.18( v 71'487 (0'0,71'487] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=11.990962029s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=71'486 lcod 71'486 active pruub 127.968605042s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:13 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 80 pg[9.18( v 71'487 (0'0,71'487] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=11.990904808s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=71'486 lcod 71'486 unknown NOTIFY pruub 127.968605042s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:13 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 80 pg[9.8( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=11.991043091s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 127.968696594s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:13 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 80 pg[9.18( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=79) [2] r=0 lpr=80 pi=[56,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:13 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 80 pg[9.8( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=79) [2] r=0 lpr=80 pi=[56,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:13 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 09 16:06:13 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 09 16:06:13 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 290 B/s, 5 objects/s recovering
Dec 09 16:06:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 09 16:06:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 09 16:06:13 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 09 16:06:14 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 81 pg[9.8( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[56,81)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:14 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 81 pg[9.8( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[56,81)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:14 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 81 pg[9.18( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[56,81)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:14 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 81 pg[9.18( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[56,81)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:14 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 81 pg[9.18( v 71'487 (0'0,71'487] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=81) [2]/[1] r=0 lpr=81 pi=[56,81)/2 crt=71'486 lcod 71'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:14 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 81 pg[9.18( v 71'487 (0'0,71'487] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=81) [2]/[1] r=0 lpr=81 pi=[56,81)/2 crt=71'486 lcod 71'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:14 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 81 pg[9.8( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=81) [2]/[1] r=0 lpr=81 pi=[56,81)/2 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:14 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 81 pg[9.8( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=81) [2]/[1] r=0 lpr=81 pi=[56,81)/2 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:14 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 81 pg[9.f( v 71'485 (0'0,71'485] local-lis/les=80/81 n=7 ec=56/40 lis/c=78/64 les/c/f=79/65/0 sis=80) [2] r=0 lpr=80 pi=[64,80)/1 crt=71'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:14 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 81 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=80/81 n=6 ec=56/40 lis/c=78/63 les/c/f=79/64/0 sis=80) [2] r=0 lpr=80 pi=[63,80)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:14 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 81 pg[9.17( v 71'485 (0'0,71'485] local-lis/les=80/81 n=6 ec=56/40 lis/c=78/63 les/c/f=79/64/0 sis=80) [2] r=0 lpr=80 pi=[63,80)/1 crt=71'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:14 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 81 pg[9.7( v 71'487 (0'0,71'487] local-lis/les=80/81 n=7 ec=56/40 lis/c=78/64 les/c/f=79/65/0 sis=80) [2] r=0 lpr=80 pi=[64,80)/1 crt=71'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:14 compute-0 ceph-mon[75222]: 4.17 scrub starts
Dec 09 16:06:14 compute-0 ceph-mon[75222]: 4.17 scrub ok
Dec 09 16:06:14 compute-0 ceph-mon[75222]: 7.1d scrub starts
Dec 09 16:06:14 compute-0 ceph-mon[75222]: 7.1d scrub ok
Dec 09 16:06:14 compute-0 ceph-mon[75222]: osdmap e81: 3 total, 3 up, 3 in
Dec 09 16:06:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:06:14 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 09 16:06:14 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 09 16:06:14 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 09 16:06:14 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 09 16:06:14 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 09 16:06:14 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 09 16:06:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 09 16:06:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 09 16:06:15 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 09 16:06:15 compute-0 ceph-mon[75222]: 10.1d scrub starts
Dec 09 16:06:15 compute-0 ceph-mon[75222]: 10.1d scrub ok
Dec 09 16:06:15 compute-0 ceph-mon[75222]: pgmap v155: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 290 B/s, 5 objects/s recovering
Dec 09 16:06:15 compute-0 ceph-mon[75222]: 8.13 scrub starts
Dec 09 16:06:15 compute-0 ceph-mon[75222]: 8.13 scrub ok
Dec 09 16:06:15 compute-0 ceph-mon[75222]: osdmap e82: 3 total, 3 up, 3 in
Dec 09 16:06:15 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 09 16:06:15 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 09 16:06:15 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 09 16:06:15 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 82 pg[9.18( v 71'487 (0'0,71'487] local-lis/les=81/82 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[56,81)/2 crt=71'487 lcod 71'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:15 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 82 pg[9.8( v 46'483 (0'0,46'483] local-lis/les=81/82 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[56,81)/2 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:15 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 09 16:06:15 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 290 B/s, 5 objects/s recovering
Dec 09 16:06:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 09 16:06:16 compute-0 ceph-mon[75222]: 7.1b scrub starts
Dec 09 16:06:16 compute-0 ceph-mon[75222]: 7.1b scrub ok
Dec 09 16:06:16 compute-0 ceph-mon[75222]: 10.1c scrub starts
Dec 09 16:06:16 compute-0 ceph-mon[75222]: 10.1c scrub ok
Dec 09 16:06:16 compute-0 ceph-mon[75222]: 7.7 scrub starts
Dec 09 16:06:16 compute-0 ceph-mon[75222]: 7.7 scrub ok
Dec 09 16:06:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 09 16:06:16 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 09 16:06:16 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 83 pg[9.8( v 46'483 (0'0,46'483] local-lis/les=81/82 n=7 ec=56/40 lis/c=81/56 les/c/f=82/57/0 sis=83 pruub=15.645263672s) [2] async=[2] r=-1 lpr=83 pi=[56,83)/2 crt=46'483 lcod 0'0 active pruub 134.224792480s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:16 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 83 pg[9.8( v 46'483 (0'0,46'483] local-lis/les=81/82 n=7 ec=56/40 lis/c=81/56 les/c/f=82/57/0 sis=83 pruub=15.645168304s) [2] r=-1 lpr=83 pi=[56,83)/2 crt=46'483 lcod 0'0 unknown NOTIFY pruub 134.224792480s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:16 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 83 pg[9.18( v 71'487 (0'0,71'487] local-lis/les=81/82 n=6 ec=56/40 lis/c=81/56 les/c/f=82/57/0 sis=83 pruub=15.642454147s) [2] async=[2] r=-1 lpr=83 pi=[56,83)/2 crt=71'487 lcod 71'486 active pruub 134.222396851s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:16 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 83 pg[9.18( v 71'487 (0'0,71'487] local-lis/les=81/82 n=6 ec=56/40 lis/c=81/56 les/c/f=82/57/0 sis=83 pruub=15.642400742s) [2] r=-1 lpr=83 pi=[56,83)/2 crt=71'487 lcod 71'486 unknown NOTIFY pruub 134.222396851s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:16 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 83 pg[9.8( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=81/56 les/c/f=82/57/0 sis=83) [2] r=0 lpr=83 pi=[56,83)/2 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:16 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 83 pg[9.8( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=81/56 les/c/f=82/57/0 sis=83) [2] r=0 lpr=83 pi=[56,83)/2 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:16 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 83 pg[9.18( v 71'487 (0'0,71'487] local-lis/les=0/0 n=6 ec=56/40 lis/c=81/56 les/c/f=82/57/0 sis=83) [2] r=0 lpr=83 pi=[56,83)/2 pct=0'0 crt=71'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:16 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 83 pg[9.18( v 71'487 (0'0,71'487] local-lis/les=0/0 n=6 ec=56/40 lis/c=81/56 les/c/f=82/57/0 sis=83) [2] r=0 lpr=83 pi=[56,83)/2 crt=71'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:16 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 09 16:06:16 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 09 16:06:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 09 16:06:17 compute-0 ceph-mon[75222]: 2.14 scrub starts
Dec 09 16:06:17 compute-0 ceph-mon[75222]: 2.14 scrub ok
Dec 09 16:06:17 compute-0 ceph-mon[75222]: pgmap v158: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 290 B/s, 5 objects/s recovering
Dec 09 16:06:17 compute-0 ceph-mon[75222]: osdmap e83: 3 total, 3 up, 3 in
Dec 09 16:06:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 09 16:06:17 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 09 16:06:17 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 84 pg[9.8( v 46'483 (0'0,46'483] local-lis/les=83/84 n=7 ec=56/40 lis/c=81/56 les/c/f=82/57/0 sis=83) [2] r=0 lpr=83 pi=[56,83)/2 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:17 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 84 pg[9.18( v 71'487 (0'0,71'487] local-lis/les=83/84 n=6 ec=56/40 lis/c=81/56 les/c/f=82/57/0 sis=83) [2] r=0 lpr=83 pi=[56,83)/2 crt=71'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:17 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:18 compute-0 ceph-mon[75222]: 10.1b scrub starts
Dec 09 16:06:18 compute-0 ceph-mon[75222]: 10.1b scrub ok
Dec 09 16:06:18 compute-0 ceph-mon[75222]: osdmap e84: 3 total, 3 up, 3 in
Dec 09 16:06:18 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec 09 16:06:18 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec 09 16:06:18 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Dec 09 16:06:18 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Dec 09 16:06:18 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 09 16:06:18 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 09 16:06:19 compute-0 ceph-mon[75222]: pgmap v161: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:19 compute-0 ceph-mon[75222]: 8.a scrub starts
Dec 09 16:06:19 compute-0 ceph-mon[75222]: 8.a scrub ok
Dec 09 16:06:19 compute-0 ceph-mon[75222]: 11.17 scrub starts
Dec 09 16:06:19 compute-0 ceph-mon[75222]: 11.17 scrub ok
Dec 09 16:06:19 compute-0 ceph-mon[75222]: 2.12 scrub starts
Dec 09 16:06:19 compute-0 ceph-mon[75222]: 2.12 scrub ok
Dec 09 16:06:19 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 09 16:06:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:06:19 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 09 16:06:19 compute-0 sshd-session[98436]: Accepted publickey for zuul from 192.168.122.30 port 36464 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:06:19 compute-0 systemd-logind[786]: New session 34 of user zuul.
Dec 09 16:06:19 compute-0 systemd[1]: Started Session 34 of User zuul.
Dec 09 16:06:19 compute-0 sshd-session[98436]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:06:19 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 09 16:06:19 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 09 16:06:19 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 89 B/s, 2 objects/s recovering
Dec 09 16:06:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 09 16:06:19 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 09 16:06:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 09 16:06:19 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 09 16:06:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 09 16:06:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 09 16:06:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 09 16:06:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 09 16:06:20 compute-0 ceph-mon[75222]: 8.14 scrub starts
Dec 09 16:06:20 compute-0 ceph-mon[75222]: 8.14 scrub ok
Dec 09 16:06:20 compute-0 ceph-mon[75222]: 10.18 scrub starts
Dec 09 16:06:20 compute-0 ceph-mon[75222]: 10.18 scrub ok
Dec 09 16:06:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 09 16:06:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 09 16:06:20 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 09 16:06:20 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 09 16:06:20 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 09 16:06:20 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 09 16:06:20 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 09 16:06:20 compute-0 python3.9[98589]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:06:21 compute-0 sudo[98621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:06:21 compute-0 sudo[98621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:06:21 compute-0 sudo[98621]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:21 compute-0 ceph-mon[75222]: pgmap v162: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 89 B/s, 2 objects/s recovering
Dec 09 16:06:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 09 16:06:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 09 16:06:21 compute-0 ceph-mon[75222]: osdmap e85: 3 total, 3 up, 3 in
Dec 09 16:06:21 compute-0 ceph-mon[75222]: 7.18 scrub starts
Dec 09 16:06:21 compute-0 ceph-mon[75222]: 7.18 scrub ok
Dec 09 16:06:21 compute-0 ceph-mon[75222]: 8.8 scrub starts
Dec 09 16:06:21 compute-0 ceph-mon[75222]: 8.8 scrub ok
Dec 09 16:06:21 compute-0 sudo[98657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:06:21 compute-0 sudo[98657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:06:21 compute-0 sudo[98657]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:06:21 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:06:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:06:21 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:06:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:06:21 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:06:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:06:21 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:06:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:06:21 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:06:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:06:21 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:06:21 compute-0 sudo[98813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:06:21 compute-0 sudo[98813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:06:21 compute-0 sudo[98813]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:21 compute-0 sudo[98838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:06:21 compute-0 sudo[98838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:06:21 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 2 objects/s recovering
Dec 09 16:06:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 09 16:06:21 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 09 16:06:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 09 16:06:21 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 09 16:06:21 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 85 pg[6.9( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=85 pruub=13.708499908s) [0] r=-1 lpr=85 pi=[60,85)/1 crt=40'39 lcod 0'0 active pruub 138.140228271s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:21 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 85 pg[6.9( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=85 pruub=13.708456993s) [0] r=-1 lpr=85 pi=[60,85)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 138.140228271s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:21 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 85 pg[6.9( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=85) [0] r=0 lpr=85 pi=[60,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:21 compute-0 sudo[98936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odxnvauytdbtoxutoicbmsivbfqtofzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296381.590571-32-984210020355/AnsiballZ_command.py'
Dec 09 16:06:21 compute-0 sudo[98936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:06:22 compute-0 podman[98952]: 2025-12-09 16:06:22.074614087 +0000 UTC m=+0.037354839 container create bdd6e25f61fb63b7368264250da71d2f2fbec7a4ef6b2a18d84ec2acd9d1e7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 09 16:06:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:06:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:06:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:06:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:06:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:06:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:06:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 09 16:06:22 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 09 16:06:22 compute-0 systemd[1]: Started libpod-conmon-bdd6e25f61fb63b7368264250da71d2f2fbec7a4ef6b2a18d84ec2acd9d1e7c7.scope.
Dec 09 16:06:22 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:06:22 compute-0 podman[98952]: 2025-12-09 16:06:22.151943288 +0000 UTC m=+0.114684080 container init bdd6e25f61fb63b7368264250da71d2f2fbec7a4ef6b2a18d84ec2acd9d1e7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_meninsky, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:06:22 compute-0 podman[98952]: 2025-12-09 16:06:22.058752842 +0000 UTC m=+0.021493614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:06:22 compute-0 podman[98952]: 2025-12-09 16:06:22.164239513 +0000 UTC m=+0.126980265 container start bdd6e25f61fb63b7368264250da71d2f2fbec7a4ef6b2a18d84ec2acd9d1e7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_meninsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:06:22 compute-0 podman[98952]: 2025-12-09 16:06:22.167408952 +0000 UTC m=+0.130149754 container attach bdd6e25f61fb63b7368264250da71d2f2fbec7a4ef6b2a18d84ec2acd9d1e7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_meninsky, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 09 16:06:22 compute-0 systemd[1]: libpod-bdd6e25f61fb63b7368264250da71d2f2fbec7a4ef6b2a18d84ec2acd9d1e7c7.scope: Deactivated successfully.
Dec 09 16:06:22 compute-0 eloquent_meninsky[98969]: 167 167
Dec 09 16:06:22 compute-0 conmon[98969]: conmon bdd6e25f61fb63b73682 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bdd6e25f61fb63b7368264250da71d2f2fbec7a4ef6b2a18d84ec2acd9d1e7c7.scope/container/memory.events
Dec 09 16:06:22 compute-0 podman[98952]: 2025-12-09 16:06:22.169915453 +0000 UTC m=+0.132656205 container died bdd6e25f61fb63b7368264250da71d2f2fbec7a4ef6b2a18d84ec2acd9d1e7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_meninsky, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 16:06:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1f29a218debc05c738598e949864eb71f9d33d4280f4cb51ebfad80b51a9415-merged.mount: Deactivated successfully.
Dec 09 16:06:22 compute-0 python3.9[98940]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:06:22 compute-0 podman[98952]: 2025-12-09 16:06:22.214433372 +0000 UTC m=+0.177174164 container remove bdd6e25f61fb63b7368264250da71d2f2fbec7a4ef6b2a18d84ec2acd9d1e7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_meninsky, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 09 16:06:22 compute-0 systemd[1]: libpod-conmon-bdd6e25f61fb63b7368264250da71d2f2fbec7a4ef6b2a18d84ec2acd9d1e7c7.scope: Deactivated successfully.
Dec 09 16:06:22 compute-0 podman[98998]: 2025-12-09 16:06:22.396531804 +0000 UTC m=+0.056360663 container create 42fb4294355838aa945fd330bd6d48e955021bd8caec483e1c8135001968f1fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sutherland, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:06:22 compute-0 systemd[1]: Started libpod-conmon-42fb4294355838aa945fd330bd6d48e955021bd8caec483e1c8135001968f1fc.scope.
Dec 09 16:06:22 compute-0 podman[98998]: 2025-12-09 16:06:22.36751353 +0000 UTC m=+0.027342449 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:06:22 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a46dc1c0d6e049b9251eadbf5695abe3b88a6de393f572cefc1ce57c63ef67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a46dc1c0d6e049b9251eadbf5695abe3b88a6de393f572cefc1ce57c63ef67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a46dc1c0d6e049b9251eadbf5695abe3b88a6de393f572cefc1ce57c63ef67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a46dc1c0d6e049b9251eadbf5695abe3b88a6de393f572cefc1ce57c63ef67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a46dc1c0d6e049b9251eadbf5695abe3b88a6de393f572cefc1ce57c63ef67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:22 compute-0 podman[98998]: 2025-12-09 16:06:22.506937423 +0000 UTC m=+0.166766302 container init 42fb4294355838aa945fd330bd6d48e955021bd8caec483e1c8135001968f1fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sutherland, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 09 16:06:22 compute-0 podman[98998]: 2025-12-09 16:06:22.515906665 +0000 UTC m=+0.175735524 container start 42fb4294355838aa945fd330bd6d48e955021bd8caec483e1c8135001968f1fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:06:22 compute-0 podman[98998]: 2025-12-09 16:06:22.51998361 +0000 UTC m=+0.179812489 container attach 42fb4294355838aa945fd330bd6d48e955021bd8caec483e1c8135001968f1fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:06:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 09 16:06:22 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 09 16:06:22 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 09 16:06:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 09 16:06:22 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 09 16:06:22 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 86 pg[6.9( v 40'39 (0'0,40'39] local-lis/les=85/86 n=1 ec=52/27 lis/c=60/60 les/c/f=61/61/0 sis=85) [0] r=0 lpr=85 pi=[60,85)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:22 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec 09 16:06:22 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec 09 16:06:23 compute-0 wonderful_sutherland[99015]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:06:23 compute-0 wonderful_sutherland[99015]: --> All data devices are unavailable
Dec 09 16:06:23 compute-0 ceph-mon[75222]: pgmap v164: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 2 objects/s recovering
Dec 09 16:06:23 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 09 16:06:23 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 09 16:06:23 compute-0 ceph-mon[75222]: osdmap e86: 3 total, 3 up, 3 in
Dec 09 16:06:23 compute-0 ceph-mon[75222]: 2.10 scrub starts
Dec 09 16:06:23 compute-0 ceph-mon[75222]: 2.10 scrub ok
Dec 09 16:06:23 compute-0 systemd[1]: libpod-42fb4294355838aa945fd330bd6d48e955021bd8caec483e1c8135001968f1fc.scope: Deactivated successfully.
Dec 09 16:06:23 compute-0 podman[98998]: 2025-12-09 16:06:23.109143338 +0000 UTC m=+0.768972207 container died 42fb4294355838aa945fd330bd6d48e955021bd8caec483e1c8135001968f1fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 09 16:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-43a46dc1c0d6e049b9251eadbf5695abe3b88a6de393f572cefc1ce57c63ef67-merged.mount: Deactivated successfully.
Dec 09 16:06:23 compute-0 podman[98998]: 2025-12-09 16:06:23.151276591 +0000 UTC m=+0.811105430 container remove 42fb4294355838aa945fd330bd6d48e955021bd8caec483e1c8135001968f1fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sutherland, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 09 16:06:23 compute-0 systemd[1]: libpod-conmon-42fb4294355838aa945fd330bd6d48e955021bd8caec483e1c8135001968f1fc.scope: Deactivated successfully.
Dec 09 16:06:23 compute-0 sudo[98838]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:23 compute-0 sudo[99050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:06:23 compute-0 sudo[99050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:06:23 compute-0 sudo[99050]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:23 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 09 16:06:23 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 09 16:06:23 compute-0 sudo[99075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:06:23 compute-0 sudo[99075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:06:23 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 86 pg[6.a( v 40'39 (0'0,40'39] local-lis/les=62/63 n=1 ec=52/27 lis/c=62/62 les/c/f=63/63/0 sis=86 pruub=13.925700188s) [0] r=-1 lpr=86 pi=[62,86)/1 crt=40'39 lcod 0'0 active pruub 139.853775024s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:23 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 86 pg[6.a( v 40'39 (0'0,40'39] local-lis/les=62/63 n=1 ec=52/27 lis/c=62/62 les/c/f=63/63/0 sis=86 pruub=13.925585747s) [0] r=-1 lpr=86 pi=[62,86)/1 crt=40'39 lcod 0'0 unknown NOTIFY pruub 139.853775024s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:23 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 86 pg[6.a( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=62/62 les/c/f=63/63/0 sis=86) [0] r=0 lpr=86 pi=[62,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:23 compute-0 podman[99112]: 2025-12-09 16:06:23.676286789 +0000 UTC m=+0.046581388 container create d9efa198649e93fd321f4cd2923084134556576dd7ec3d9564679cd93c8ae97f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:06:23 compute-0 systemd[1]: Started libpod-conmon-d9efa198649e93fd321f4cd2923084134556576dd7ec3d9564679cd93c8ae97f.scope.
Dec 09 16:06:23 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:06:23 compute-0 podman[99112]: 2025-12-09 16:06:23.65921191 +0000 UTC m=+0.029506529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:06:23 compute-0 podman[99112]: 2025-12-09 16:06:23.756172912 +0000 UTC m=+0.126467511 container init d9efa198649e93fd321f4cd2923084134556576dd7ec3d9564679cd93c8ae97f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Dec 09 16:06:23 compute-0 podman[99112]: 2025-12-09 16:06:23.761991535 +0000 UTC m=+0.132286134 container start d9efa198649e93fd321f4cd2923084134556576dd7ec3d9564679cd93c8ae97f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 09 16:06:23 compute-0 podman[99112]: 2025-12-09 16:06:23.76539114 +0000 UTC m=+0.135685759 container attach d9efa198649e93fd321f4cd2923084134556576dd7ec3d9564679cd93c8ae97f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:06:23 compute-0 thirsty_wozniak[99128]: 167 167
Dec 09 16:06:23 compute-0 systemd[1]: libpod-d9efa198649e93fd321f4cd2923084134556576dd7ec3d9564679cd93c8ae97f.scope: Deactivated successfully.
Dec 09 16:06:23 compute-0 conmon[99128]: conmon d9efa198649e93fd321f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9efa198649e93fd321f4cd2923084134556576dd7ec3d9564679cd93c8ae97f.scope/container/memory.events
Dec 09 16:06:23 compute-0 podman[99112]: 2025-12-09 16:06:23.767732676 +0000 UTC m=+0.138027275 container died d9efa198649e93fd321f4cd2923084134556576dd7ec3d9564679cd93c8ae97f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wozniak, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-3500f3a75ed406861fa7b181a6f1229dd09b9924152a1d840627ad7114bc1042-merged.mount: Deactivated successfully.
Dec 09 16:06:23 compute-0 podman[99112]: 2025-12-09 16:06:23.821394703 +0000 UTC m=+0.191689312 container remove d9efa198649e93fd321f4cd2923084134556576dd7ec3d9564679cd93c8ae97f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wozniak, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 09 16:06:23 compute-0 systemd[1]: libpod-conmon-d9efa198649e93fd321f4cd2923084134556576dd7ec3d9564679cd93c8ae97f.scope: Deactivated successfully.
Dec 09 16:06:23 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 77 B/s, 1 objects/s recovering
Dec 09 16:06:23 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 09 16:06:23 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 09 16:06:23 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 09 16:06:23 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 09 16:06:23 compute-0 podman[99154]: 2025-12-09 16:06:23.980040286 +0000 UTC m=+0.046215738 container create 34e34e4e02fd133a22d06ae90dadda9068f69385f6cdf4d3105dd868c111bb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_golick, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:06:24 compute-0 systemd[1]: Started libpod-conmon-34e34e4e02fd133a22d06ae90dadda9068f69385f6cdf4d3105dd868c111bb6a.scope.
Dec 09 16:06:24 compute-0 podman[99154]: 2025-12-09 16:06:23.959226372 +0000 UTC m=+0.025401874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:06:24 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb33865ed05d01d999e18c01d4fa32ebd7cf8cc93c8da9c8936b3ba81100bd04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb33865ed05d01d999e18c01d4fa32ebd7cf8cc93c8da9c8936b3ba81100bd04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb33865ed05d01d999e18c01d4fa32ebd7cf8cc93c8da9c8936b3ba81100bd04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb33865ed05d01d999e18c01d4fa32ebd7cf8cc93c8da9c8936b3ba81100bd04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:24 compute-0 podman[99154]: 2025-12-09 16:06:24.077041849 +0000 UTC m=+0.143217321 container init 34e34e4e02fd133a22d06ae90dadda9068f69385f6cdf4d3105dd868c111bb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_golick, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:06:24 compute-0 podman[99154]: 2025-12-09 16:06:24.08845933 +0000 UTC m=+0.154634782 container start 34e34e4e02fd133a22d06ae90dadda9068f69385f6cdf4d3105dd868c111bb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_golick, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 09 16:06:24 compute-0 podman[99154]: 2025-12-09 16:06:24.095356283 +0000 UTC m=+0.161531755 container attach 34e34e4e02fd133a22d06ae90dadda9068f69385f6cdf4d3105dd868c111bb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 09 16:06:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 09 16:06:24 compute-0 ceph-mon[75222]: 7.1f scrub starts
Dec 09 16:06:24 compute-0 ceph-mon[75222]: 7.1f scrub ok
Dec 09 16:06:24 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 09 16:06:24 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 09 16:06:24 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 09 16:06:24 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 09 16:06:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 09 16:06:24 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 09 16:06:24 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 87 pg[6.b( v 40'39 (0'0,40'39] local-lis/les=66/67 n=1 ec=52/27 lis/c=66/66 les/c/f=67/67/0 sis=87 pruub=13.484531403s) [1] r=-1 lpr=87 pi=[66,87)/1 crt=40'39 active pruub 144.294937134s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:24 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 87 pg[6.b( v 40'39 (0'0,40'39] local-lis/les=66/67 n=1 ec=52/27 lis/c=66/66 les/c/f=67/67/0 sis=87 pruub=13.484432220s) [1] r=-1 lpr=87 pi=[66,87)/1 crt=40'39 unknown NOTIFY pruub 144.294937134s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:24 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 87 pg[6.a( v 40'39 (0'0,40'39] local-lis/les=86/87 n=1 ec=52/27 lis/c=62/62 les/c/f=63/63/0 sis=86) [0] r=0 lpr=86 pi=[62,86)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:24 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 87 pg[6.b( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=66/66 les/c/f=67/67/0 sis=87) [1] r=0 lpr=87 pi=[66,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:24 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 09 16:06:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:06:24 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 09 16:06:24 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec 09 16:06:24 compute-0 friendly_golick[99171]: {
Dec 09 16:06:24 compute-0 friendly_golick[99171]:     "0": [
Dec 09 16:06:24 compute-0 friendly_golick[99171]:         {
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "devices": [
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "/dev/loop3"
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             ],
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_name": "ceph_lv0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_size": "21470642176",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "name": "ceph_lv0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "tags": {
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.cluster_name": "ceph",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.crush_device_class": "",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.encrypted": "0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.objectstore": "bluestore",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.osd_id": "0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.type": "block",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.vdo": "0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.with_tpm": "0"
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             },
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "type": "block",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "vg_name": "ceph_vg0"
Dec 09 16:06:24 compute-0 friendly_golick[99171]:         }
Dec 09 16:06:24 compute-0 friendly_golick[99171]:     ],
Dec 09 16:06:24 compute-0 friendly_golick[99171]:     "1": [
Dec 09 16:06:24 compute-0 friendly_golick[99171]:         {
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "devices": [
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "/dev/loop4"
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             ],
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_name": "ceph_lv1",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_size": "21470642176",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "name": "ceph_lv1",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "tags": {
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.cluster_name": "ceph",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.crush_device_class": "",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.encrypted": "0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.objectstore": "bluestore",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.osd_id": "1",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.type": "block",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.vdo": "0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.with_tpm": "0"
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             },
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "type": "block",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "vg_name": "ceph_vg1"
Dec 09 16:06:24 compute-0 friendly_golick[99171]:         }
Dec 09 16:06:24 compute-0 friendly_golick[99171]:     ],
Dec 09 16:06:24 compute-0 friendly_golick[99171]:     "2": [
Dec 09 16:06:24 compute-0 friendly_golick[99171]:         {
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "devices": [
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "/dev/loop5"
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             ],
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_name": "ceph_lv2",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_size": "21470642176",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "name": "ceph_lv2",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "tags": {
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.cluster_name": "ceph",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.crush_device_class": "",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.encrypted": "0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.objectstore": "bluestore",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.osd_id": "2",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.type": "block",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.vdo": "0",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:                 "ceph.with_tpm": "0"
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             },
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "type": "block",
Dec 09 16:06:24 compute-0 friendly_golick[99171]:             "vg_name": "ceph_vg2"
Dec 09 16:06:24 compute-0 friendly_golick[99171]:         }
Dec 09 16:06:24 compute-0 friendly_golick[99171]:     ]
Dec 09 16:06:24 compute-0 friendly_golick[99171]: }
Dec 09 16:06:24 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec 09 16:06:24 compute-0 systemd[1]: libpod-34e34e4e02fd133a22d06ae90dadda9068f69385f6cdf4d3105dd868c111bb6a.scope: Deactivated successfully.
Dec 09 16:06:24 compute-0 podman[99154]: 2025-12-09 16:06:24.433211336 +0000 UTC m=+0.499386798 container died 34e34e4e02fd133a22d06ae90dadda9068f69385f6cdf4d3105dd868c111bb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 16:06:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb33865ed05d01d999e18c01d4fa32ebd7cf8cc93c8da9c8936b3ba81100bd04-merged.mount: Deactivated successfully.
Dec 09 16:06:24 compute-0 podman[99154]: 2025-12-09 16:06:24.478104577 +0000 UTC m=+0.544280179 container remove 34e34e4e02fd133a22d06ae90dadda9068f69385f6cdf4d3105dd868c111bb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_golick, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:06:24 compute-0 systemd[1]: libpod-conmon-34e34e4e02fd133a22d06ae90dadda9068f69385f6cdf4d3105dd868c111bb6a.scope: Deactivated successfully.
Dec 09 16:06:24 compute-0 sudo[99075]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:24 compute-0 sudo[99194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:06:24 compute-0 sudo[99194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:06:24 compute-0 sudo[99194]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:24 compute-0 sshd-session[99176]: Invalid user dspace from 146.190.31.45 port 37160
Dec 09 16:06:24 compute-0 sudo[99219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:06:24 compute-0 sudo[99219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:06:24 compute-0 sshd-session[99176]: Connection closed by invalid user dspace 146.190.31.45 port 37160 [preauth]
Dec 09 16:06:24 compute-0 podman[99255]: 2025-12-09 16:06:24.957861544 +0000 UTC m=+0.073515894 container create 3a23fdfe2304aad0261698badd05fa5e02de822ba6220a4fc65df0318da78fc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:06:25 compute-0 systemd[1]: Started libpod-conmon-3a23fdfe2304aad0261698badd05fa5e02de822ba6220a4fc65df0318da78fc9.scope.
Dec 09 16:06:25 compute-0 podman[99255]: 2025-12-09 16:06:24.906636946 +0000 UTC m=+0.022291326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:06:25 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:06:25 compute-0 podman[99255]: 2025-12-09 16:06:25.04926828 +0000 UTC m=+0.164922640 container init 3a23fdfe2304aad0261698badd05fa5e02de822ba6220a4fc65df0318da78fc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:06:25 compute-0 podman[99255]: 2025-12-09 16:06:25.056557895 +0000 UTC m=+0.172212245 container start 3a23fdfe2304aad0261698badd05fa5e02de822ba6220a4fc65df0318da78fc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_torvalds, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:06:25 compute-0 podman[99255]: 2025-12-09 16:06:25.060584338 +0000 UTC m=+0.176238708 container attach 3a23fdfe2304aad0261698badd05fa5e02de822ba6220a4fc65df0318da78fc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_torvalds, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:06:25 compute-0 reverent_torvalds[99271]: 167 167
Dec 09 16:06:25 compute-0 systemd[1]: libpod-3a23fdfe2304aad0261698badd05fa5e02de822ba6220a4fc65df0318da78fc9.scope: Deactivated successfully.
Dec 09 16:06:25 compute-0 podman[99255]: 2025-12-09 16:06:25.06385719 +0000 UTC m=+0.179511540 container died 3a23fdfe2304aad0261698badd05fa5e02de822ba6220a4fc65df0318da78fc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_torvalds, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 09 16:06:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 09 16:06:25 compute-0 ceph-mon[75222]: pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 77 B/s, 1 objects/s recovering
Dec 09 16:06:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 09 16:06:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 09 16:06:25 compute-0 ceph-mon[75222]: osdmap e87: 3 total, 3 up, 3 in
Dec 09 16:06:25 compute-0 ceph-mon[75222]: 11.10 scrub starts
Dec 09 16:06:25 compute-0 ceph-mon[75222]: 11.10 scrub ok
Dec 09 16:06:25 compute-0 ceph-mon[75222]: 11.0 scrub starts
Dec 09 16:06:25 compute-0 ceph-mon[75222]: 11.0 scrub ok
Dec 09 16:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a06ae21fb5257a0579ca04f8d04f3a23c7382b38df78d8b4e299442624d9340-merged.mount: Deactivated successfully.
Dec 09 16:06:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 09 16:06:25 compute-0 podman[99255]: 2025-12-09 16:06:25.208680865 +0000 UTC m=+0.324335235 container remove 3a23fdfe2304aad0261698badd05fa5e02de822ba6220a4fc65df0318da78fc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_torvalds, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:06:25 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 09 16:06:25 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 88 pg[6.b( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=87/88 n=1 ec=52/27 lis/c=66/66 les/c/f=67/67/0 sis=87) [1] r=0 lpr=87 pi=[66,87)/1 crt=40'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:25 compute-0 systemd[1]: libpod-conmon-3a23fdfe2304aad0261698badd05fa5e02de822ba6220a4fc65df0318da78fc9.scope: Deactivated successfully.
Dec 09 16:06:25 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec 09 16:06:25 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec 09 16:06:25 compute-0 podman[99298]: 2025-12-09 16:06:25.38446777 +0000 UTC m=+0.044896401 container create e8a293bb58cafaa0e9d006159ab6d78c08ced21a3ddf7bcc55614b3be8c595cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:06:25 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec 09 16:06:25 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec 09 16:06:25 compute-0 systemd[1]: Started libpod-conmon-e8a293bb58cafaa0e9d006159ab6d78c08ced21a3ddf7bcc55614b3be8c595cb.scope.
Dec 09 16:06:25 compute-0 podman[99298]: 2025-12-09 16:06:25.36204492 +0000 UTC m=+0.022473581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:06:25 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f15fdb095a519578b206e4a09824636c95627d9c369f57be08f978c9e37b3664/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f15fdb095a519578b206e4a09824636c95627d9c369f57be08f978c9e37b3664/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f15fdb095a519578b206e4a09824636c95627d9c369f57be08f978c9e37b3664/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f15fdb095a519578b206e4a09824636c95627d9c369f57be08f978c9e37b3664/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:06:25 compute-0 podman[99298]: 2025-12-09 16:06:25.497335428 +0000 UTC m=+0.157764079 container init e8a293bb58cafaa0e9d006159ab6d78c08ced21a3ddf7bcc55614b3be8c595cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_satoshi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 09 16:06:25 compute-0 podman[99298]: 2025-12-09 16:06:25.506494615 +0000 UTC m=+0.166923266 container start e8a293bb58cafaa0e9d006159ab6d78c08ced21a3ddf7bcc55614b3be8c595cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_satoshi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec 09 16:06:25 compute-0 podman[99298]: 2025-12-09 16:06:25.510501148 +0000 UTC m=+0.170929799 container attach e8a293bb58cafaa0e9d006159ab6d78c08ced21a3ddf7bcc55614b3be8c595cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:06:25 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 09 16:06:25 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 09 16:06:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:06:25
Dec 09 16:06:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:06:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:06:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.meta', '.rgw.root', 'vms', 'images', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec 09 16:06:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
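
[Annotation] The five balancer lines above are one automatic optimization pass: the mgr opens a plan named auto_<timestamp>, runs in upmap mode with a 5% misplaced budget, walks the listed pools, and ends with "prepared 0/10 upmap changes", i.e. the placement is already even enough that no pg-upmap-items entries are needed. A minimal sketch of that gating logic, with invented helper types (this is not the ceph-mgr balancer module, just the decision shape the log implies):

```python
# Illustrative sketch of the gating seen above:
# "Mode upmap, max misplaced 0.050000" ... "prepared 0/10 upmap changes".
from dataclasses import dataclass

MAX_MISPLACED = 0.05    # balancer option max_misplaced
MAX_CHANGES = 10        # the "/10" in "prepared 0/10"

@dataclass
class PGSummary:
    total: int
    misplaced: int

def prepared_changes(pg: PGSummary, candidate_changes: list) -> list:
    """Return the upmap changes a balancer pass would actually prepare."""
    # Skip the pass entirely if too much data is already moving.
    if pg.total and pg.misplaced / pg.total >= MAX_MISPLACED:
        return []
    # Otherwise cap the plan at MAX_CHANGES entries.
    return candidate_changes[:MAX_CHANGES]

# At 16:06:25 the cluster was 305/305 active+clean and already balanced,
# so the optimizer had nothing to propose:
plan = prepared_changes(PGSummary(total=305, misplaced=0), [])
print(f"prepared {len(plan)}/{MAX_CHANGES} upmap changes")
```
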
Dec 09 16:06:25 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 09 16:06:25 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 09 16:06:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 09 16:06:25 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
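
[Annotation] Throughout this window the mgr raises pgp_num_actual on cephfs.cephfs.meta and default.rgw.log by one step per osdmap epoch (12 at 16:06:25, 13 here, then 14, 15, 16, with default.rgw.log continuing to 20 below). This is the gradual pgp_num ramp the mgr performs after a pg_num increase, shifting placement a little at a time so the misplaced ratio stays inside the same 5% budget; cephfs.cephfs.meta stops once it reaches 16, its target. A sketch of the stepping rule as observed in this trace (the real mgr derives the step size from the misplaced budget; here it happens to be one):

```python
# One-step-per-epoch pgp_num ramp, matching the mon_command sequence in the
# audit log. Helper is invented for illustration; not the ceph-mgr source.

def next_pgp_num(pgp_num_actual: int, pg_num_target: int) -> int:
    """Advance pgp_num toward pg_num one increment at a time."""
    if pgp_num_actual >= pg_num_target:
        return pgp_num_actual          # ramp finished
    return pgp_num_actual + 1          # one 'osd pool set ... pgp_num_actual' per step

# cephfs.cephfs.meta was ramped from 12 toward pg_num 16:
steps, val = [], 12
while val < 16:
    val = next_pgp_num(val, 16)
    steps.append(val)
print(steps)   # [13, 14, 15, 16] -- the values dispatched in the log
```
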
Dec 09 16:06:26 compute-0 lvm[99399]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:06:26 compute-0 lvm[99400]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:06:26 compute-0 lvm[99399]: VG ceph_vg0 finished
Dec 09 16:06:26 compute-0 lvm[99400]: VG ceph_vg1 finished
Dec 09 16:06:26 compute-0 lvm[99402]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:06:26 compute-0 lvm[99402]: VG ceph_vg2 finished
Dec 09 16:06:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 09 16:06:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 09 16:06:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 09 16:06:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 09 16:06:26 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 09 16:06:26 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 09 16:06:26 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 09 16:06:26 compute-0 ceph-mon[75222]: osdmap e88: 3 total, 3 up, 3 in
Dec 09 16:06:26 compute-0 ceph-mon[75222]: 8.10 scrub starts
Dec 09 16:06:26 compute-0 ceph-mon[75222]: 8.10 scrub ok
Dec 09 16:06:26 compute-0 ceph-mon[75222]: 8.3 scrub starts
Dec 09 16:06:26 compute-0 ceph-mon[75222]: 8.3 scrub ok
Dec 09 16:06:26 compute-0 ceph-mon[75222]: 5.17 scrub starts
Dec 09 16:06:26 compute-0 ceph-mon[75222]: 5.17 scrub ok
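
[Annotation] The ceph-mon entries repeating "8.10 scrub starts/ok", "8.3 …", "5.17 …" about a second after the ceph-osd ones are not new scrubs: OSDs emit log_channel(cluster) messages to the monitors, and the mon writes them into the cluster log, so a merged journal on a colocated node records each event twice. A naive dedup sketch for eyeballing such dumps (assumes each PG scrubs at most once in the window scanned):

```python
# Collapse mon-side echoes of OSD scrub events when scanning a merged journal.
# Naive illustration only: a real tool would key on timestamps as well.
import re

EVENT = re.compile(r"(\d+\.[0-9a-f]+ scrub (?:starts|ok))$")

def dedup_scrub_events(lines):
    seen = set()
    for line in lines:
        m = EVENT.search(line)
        if m and m.group(1) in seen:
            continue            # drop the monitor's re-broadcast
        if m:
            seen.add(m.group(1))
        yield line

lines = [
    "ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.10 scrub starts",
    "ceph-mon[75222]: 8.10 scrub starts",
]
print(len(list(dedup_scrub_events(lines))))   # 1 -- the mon echo was dropped
```
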
Dec 09 16:06:26 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 09 16:06:26 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 09 16:06:26 compute-0 boring_satoshi[99315]: {}
Dec 09 16:06:26 compute-0 systemd[1]: libpod-e8a293bb58cafaa0e9d006159ab6d78c08ced21a3ddf7bcc55614b3be8c595cb.scope: Deactivated successfully.
Dec 09 16:06:26 compute-0 systemd[1]: libpod-e8a293bb58cafaa0e9d006159ab6d78c08ced21a3ddf7bcc55614b3be8c595cb.scope: Consumed 1.343s CPU time.
Dec 09 16:06:26 compute-0 podman[99298]: 2025-12-09 16:06:26.366932659 +0000 UTC m=+1.027361320 container died e8a293bb58cafaa0e9d006159ab6d78c08ced21a3ddf7bcc55614b3be8c595cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:06:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f15fdb095a519578b206e4a09824636c95627d9c369f57be08f978c9e37b3664-merged.mount: Deactivated successfully.
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:06:26 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 89 pg[9.c( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=14.845924377s) [2] r=-1 lpr=89 pi=[56,89)/1 crt=46'483 lcod 0'0 active pruub 143.964385986s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:26 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 89 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=14.850517273s) [2] r=-1 lpr=89 pi=[56,89)/1 crt=71'486 lcod 71'486 active pruub 143.968994141s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:26 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 89 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=14.850234985s) [2] r=-1 lpr=89 pi=[56,89)/1 crt=71'486 lcod 71'486 unknown NOTIFY pruub 143.968994141s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:26 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 89 pg[9.c( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=14.845666885s) [2] r=-1 lpr=89 pi=[56,89)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 143.964385986s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:06:26 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 89 pg[9.c( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=89) [2] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:26 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 89 pg[9.1c( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=89) [2] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:26 compute-0 podman[99298]: 2025-12-09 16:06:26.595776963 +0000 UTC m=+1.256205614 container remove e8a293bb58cafaa0e9d006159ab6d78c08ced21a3ddf7bcc55614b3be8c595cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 09 16:06:26 compute-0 systemd[1]: libpod-conmon-e8a293bb58cafaa0e9d006159ab6d78c08ced21a3ddf7bcc55614b3be8c595cb.scope: Deactivated successfully.
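
[Annotation] The two short-lived containers in this window (reverent_torvalds, then boring_satoshi, both from the same quay.io/ceph/ceph image) follow the one-shot probe pattern cephadm uses against a host: create → init → start → attach → died → remove, stdout captured in between ("167 167" at 16:06:25, "{}" at 16:06:26), then systemd tears down the libpod and conmon scopes. A minimal reproduction of the pattern, assuming podman is available; the probe command is a guess — "167 167" is consistent with a uid/gid probe, since the image's ceph user is uid/gid 167 — not a confirmed transcript of what cephadm ran:

```python
# One-shot container probe, mirroring the create/start/attach/died/remove
# sequence journald records above. The command below is illustrative.
import subprocess

IMAGE = "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"

def run_probe(image: str, *cmd: str) -> str:
    """Run a command in a throwaway container and return its stdout."""
    out = subprocess.run(
        ["podman", "run", "--rm", image, *cmd],   # --rm removes it after 'died'
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    # Plausibly the kind of probe that printed "167 167":
    print(run_probe(IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"))
```
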
Dec 09 16:06:26 compute-0 sudo[99219]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:06:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:06:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:06:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:06:26 compute-0 sudo[99422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:06:26 compute-0 sudo[99422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:06:26 compute-0 sudo[99422]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 09 16:06:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 09 16:06:27 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 09 16:06:27 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 90 pg[9.c( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[1] r=0 lpr=90 pi=[56,90)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:27 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 90 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[1] r=0 lpr=90 pi=[56,90)/1 crt=71'486 lcod 71'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:27 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 90 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=56/57 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[1] r=0 lpr=90 pi=[56,90)/1 crt=71'486 lcod 71'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:27 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 90 pg[9.c( v 46'483 (0'0,46'483] local-lis/les=56/57 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[1] r=0 lpr=90 pi=[56,90)/1 crt=46'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:27 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 90 pg[9.1c( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[1] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:27 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 90 pg[9.1c( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[1] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:27 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 90 pg[9.c( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[1] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:27 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 90 pg[9.c( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[1] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
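
[Annotation] In the epoch-90 interval above, `pg[9.c ... [2]/[1]` reads as up set [2], acting set [1]: the osdmap wants osd.2 to hold the PG, but a pg_temp mapping keeps the previous primary osd.1 acting (role 0) while osd.2 backfills as a stray (role -1), hence the "remapped" state. By e92 below, acting flips to [2] and osd.1 becomes the stray. A tiny parser for the `up [a] -> [b], acting [c] -> [d]` fragments, handy when eyeballing peering churn in a journal dump (illustrative only):

```python
# Extract up/acting transitions from PeeringState::start_peering_interval lines.
import re

INTERVAL = re.compile(
    r"up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], "
    r"acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\]"
)

def parse_interval(line: str) -> dict | None:
    m = INTERVAL.search(line)
    if not m:
        return None
    return {k: [int(x) for x in v.split(",") if x] for k, v in m.groupdict().items()}

line = ("osd.1 pg_epoch: 90 pg[9.c(...)] PeeringState::start_peering_interval "
        "up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2")
print(parse_interval(line))
# {'up_old': [2], 'up_new': [2], 'act_old': [2], 'act_new': [1]}
```
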
Dec 09 16:06:27 compute-0 ceph-mon[75222]: pgmap v169: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 09 16:06:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 09 16:06:27 compute-0 ceph-mon[75222]: 5.1e scrub starts
Dec 09 16:06:27 compute-0 ceph-mon[75222]: osdmap e89: 3 total, 3 up, 3 in
Dec 09 16:06:27 compute-0 ceph-mon[75222]: 5.1e scrub ok
Dec 09 16:06:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:06:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:06:27 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 09 16:06:27 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 09 16:06:27 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 09 16:06:27 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 09 16:06:27 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 09 16:06:27 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 09 16:06:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 09 16:06:27 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 09 16:06:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 09 16:06:28 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 09 16:06:28 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 09 16:06:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 09 16:06:28 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 09 16:06:28 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 91 pg[6.d( v 40'39 (0'0,40'39] local-lis/les=70/71 n=1 ec=52/27 lis/c=70/70 les/c/f=71/71/0 sis=91 pruub=13.298480988s) [1] r=-1 lpr=91 pi=[70,91)/1 crt=40'39 active pruub 148.346603394s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:28 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 91 pg[6.d( v 40'39 (0'0,40'39] local-lis/les=70/71 n=1 ec=52/27 lis/c=70/70 les/c/f=71/71/0 sis=91 pruub=13.298338890s) [1] r=-1 lpr=91 pi=[70,91)/1 crt=40'39 unknown NOTIFY pruub 148.346603394s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:28 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 91 pg[6.d( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=70/70 les/c/f=71/71/0 sis=91) [1] r=0 lpr=91 pi=[70,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:28 compute-0 ceph-mon[75222]: osdmap e90: 3 total, 3 up, 3 in
Dec 09 16:06:28 compute-0 ceph-mon[75222]: 8.1 scrub starts
Dec 09 16:06:28 compute-0 ceph-mon[75222]: 8.1 scrub ok
Dec 09 16:06:28 compute-0 ceph-mon[75222]: 5.8 scrub starts
Dec 09 16:06:28 compute-0 ceph-mon[75222]: 5.8 scrub ok
Dec 09 16:06:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 09 16:06:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 09 16:06:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 09 16:06:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 09 16:06:28 compute-0 ceph-mon[75222]: osdmap e91: 3 total, 3 up, 3 in
Dec 09 16:06:28 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec 09 16:06:28 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec 09 16:06:28 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 91 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=90/91 n=6 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[1] async=[2] r=0 lpr=90 pi=[56,90)/1 crt=71'487 lcod 71'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:28 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 91 pg[9.c( v 46'483 (0'0,46'483] local-lis/les=90/91 n=7 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[1] async=[2] r=0 lpr=90 pi=[56,90)/1 crt=46'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:06:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 09 16:06:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 09 16:06:29 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 09 16:06:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 92 pg[9.c( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=90/56 les/c/f=91/57/0 sis=92) [2] r=0 lpr=92 pi=[56,92)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 92 pg[9.c( v 46'483 (0'0,46'483] local-lis/les=0/0 n=7 ec=56/40 lis/c=90/56 les/c/f=91/57/0 sis=92) [2] r=0 lpr=92 pi=[56,92)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:29 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 92 pg[9.c( v 46'483 (0'0,46'483] local-lis/les=90/91 n=7 ec=56/40 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.362146378s) [2] async=[2] r=-1 lpr=92 pi=[56,92)/1 crt=46'483 lcod 0'0 active pruub 147.222702026s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:29 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 92 pg[9.c( v 46'483 (0'0,46'483] local-lis/les=90/91 n=7 ec=56/40 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.362091064s) [2] r=-1 lpr=92 pi=[56,92)/1 crt=46'483 lcod 0'0 unknown NOTIFY pruub 147.222702026s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:29 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 92 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=90/91 n=6 ec=56/40 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.358679771s) [2] async=[2] r=-1 lpr=92 pi=[56,92)/1 crt=71'487 lcod 71'486 active pruub 147.220489502s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:29 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 92 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=90/91 n=6 ec=56/40 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.358632088s) [2] r=-1 lpr=92 pi=[56,92)/1 crt=71'487 lcod 71'486 unknown NOTIFY pruub 147.220489502s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 92 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=0/0 n=6 ec=56/40 lis/c=90/56 les/c/f=91/57/0 sis=92) [2] r=0 lpr=92 pi=[56,92)/1 pct=0'0 crt=71'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:29 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 92 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=0/0 n=6 ec=56/40 lis/c=90/56 les/c/f=91/57/0 sis=92) [2] r=0 lpr=92 pi=[56,92)/1 crt=71'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:29 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 92 pg[6.d( v 40'39 lc 39'13 (0'0,40'39] local-lis/les=91/92 n=1 ec=52/27 lis/c=70/70 les/c/f=71/71/0 sis=91) [1] r=0 lpr=91 pi=[70,91)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:29 compute-0 ceph-mon[75222]: pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:29 compute-0 ceph-mon[75222]: 8.0 scrub starts
Dec 09 16:06:29 compute-0 ceph-mon[75222]: 8.0 scrub ok
Dec 09 16:06:29 compute-0 ceph-mon[75222]: osdmap e92: 3 total, 3 up, 3 in
Dec 09 16:06:29 compute-0 sudo[98936]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:29 compute-0 sshd-session[98439]: Connection closed by 192.168.122.30 port 36464
Dec 09 16:06:29 compute-0 sshd-session[98436]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:06:29 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Dec 09 16:06:29 compute-0 systemd[1]: session-34.scope: Consumed 8.302s CPU time.
Dec 09 16:06:29 compute-0 systemd-logind[786]: Session 34 logged out. Waiting for processes to exit.
Dec 09 16:06:29 compute-0 systemd-logind[786]: Removed session 34.
Dec 09 16:06:29 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 3 objects/s recovering
Dec 09 16:06:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 09 16:06:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 09 16:06:30 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 09 16:06:30 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 93 pg[9.c( v 46'483 (0'0,46'483] local-lis/les=92/93 n=7 ec=56/40 lis/c=90/56 les/c/f=91/57/0 sis=92) [2] r=0 lpr=92 pi=[56,92)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:30 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 93 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=92/93 n=6 ec=56/40 lis/c=90/56 les/c/f=91/57/0 sis=92) [2] r=0 lpr=92 pi=[56,92)/1 crt=71'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:31 compute-0 ceph-mon[75222]: pgmap v175: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 3 objects/s recovering
Dec 09 16:06:31 compute-0 ceph-mon[75222]: osdmap e93: 3 total, 3 up, 3 in
Dec 09 16:06:31 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 09 16:06:31 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 09 16:06:31 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 92 B/s, 3 objects/s recovering
Dec 09 16:06:32 compute-0 ceph-mon[75222]: 3.b scrub starts
Dec 09 16:06:32 compute-0 ceph-mon[75222]: 3.b scrub ok
Dec 09 16:06:33 compute-0 ceph-mon[75222]: pgmap v177: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 92 B/s, 3 objects/s recovering
Dec 09 16:06:33 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 09 16:06:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:06:34 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 09 16:06:34 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 09 16:06:34 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 09 16:06:34 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 09 16:06:35 compute-0 ceph-mon[75222]: pgmap v178: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 09 16:06:35 compute-0 ceph-mon[75222]: 2.19 scrub starts
Dec 09 16:06:35 compute-0 ceph-mon[75222]: 2.19 scrub ok
Dec 09 16:06:35 compute-0 ceph-mon[75222]: 2.e scrub starts
Dec 09 16:06:35 compute-0 ceph-mon[75222]: 2.e scrub ok
Dec 09 16:06:35 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 09 16:06:35 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 09 16:06:35 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 09 16:06:35 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 09 16:06:35 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 64 B/s, 2 objects/s recovering
Dec 09 16:06:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 09 16:06:35 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 09 16:06:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 09 16:06:35 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 09 16:06:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 09 16:06:36 compute-0 ceph-mon[75222]: 2.18 scrub starts
Dec 09 16:06:36 compute-0 ceph-mon[75222]: 2.18 scrub ok
Dec 09 16:06:36 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 09 16:06:36 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 09 16:06:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 09 16:06:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 09 16:06:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 09 16:06:36 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 09 16:06:36 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.779002836535062e-06 of space, bias 4.0, pg target 0.0021348034038420746 quantized to 16 (current 16)
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:06:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
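
[Annotation] Each effective_target_ratio/pg-target pair above is one pool's pg_autoscaler pass. The fractional target is the pool's share of raw space times its bias times a constant that works out to exactly 300 for every pool here, consistent with this 3-OSD cluster at the default mon_target_pg_per_osd=100; the result is then quantized (power of two, respecting pg_num_min and the current value), and since every quantized result matches the current pg_num (1, 16, 32), no resize is proposed. A consistency check against the logged numbers, with the formula shape inferred from the log rather than taken from the mgr module:

```python
# Reproduce the pg_autoscaler targets above from usage ratio and bias.
import math

NUM_OSDS = 3
MON_TARGET_PG_PER_OSD = 100   # Ceph default

def pg_target(usage_ratio: float, bias: float) -> float:
    return usage_ratio * bias * NUM_OSDS * MON_TARGET_PG_PER_OSD

logged = [  # (pool, usage ratio, bias, pg target as logged)
    (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
    ("cephfs.cephfs.meta", 1.779002836535062e-06,  4.0, 0.0021348034038420746),
    ("default.rgw.log",    4.1969867161554995e-06, 1.0, 0.0012590960148466499),
    ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
]
for pool, usage, bias, target in logged:
    assert math.isclose(pg_target(usage, bias), target), pool
print("all pool targets consistent with 3 OSDs x 100 PGs/OSD")
```
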
Dec 09 16:06:36 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 09 16:06:36 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 09 16:06:36 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 09 16:06:36 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec 09 16:06:36 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec 09 16:06:37 compute-0 ceph-mon[75222]: 11.c scrub starts
Dec 09 16:06:37 compute-0 ceph-mon[75222]: 11.c scrub ok
Dec 09 16:06:37 compute-0 ceph-mon[75222]: pgmap v179: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 64 B/s, 2 objects/s recovering
Dec 09 16:06:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 09 16:06:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 09 16:06:37 compute-0 ceph-mon[75222]: osdmap e94: 3 total, 3 up, 3 in
Dec 09 16:06:37 compute-0 ceph-mon[75222]: 3.f scrub starts
Dec 09 16:06:37 compute-0 ceph-mon[75222]: 3.f scrub ok
Dec 09 16:06:37 compute-0 ceph-mon[75222]: 10.5 scrub starts
Dec 09 16:06:37 compute-0 ceph-mon[75222]: 10.5 scrub ok
Dec 09 16:06:37 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 09 16:06:37 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 09 16:06:37 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 09 16:06:37 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 09 16:06:37 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Dec 09 16:06:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 09 16:06:37 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 09 16:06:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 09 16:06:37 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 09 16:06:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 09 16:06:38 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 09 16:06:38 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 09 16:06:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 09 16:06:38 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 09 16:06:38 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 95 pg[6.f( v 40'39 (0'0,40'39] local-lis/les=66/67 n=1 ec=52/27 lis/c=66/66 les/c/f=67/67/0 sis=95 pruub=15.189985275s) [2] r=-1 lpr=95 pi=[66,95)/1 crt=40'39 active pruub 160.295257568s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:38 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 95 pg[6.f( v 40'39 (0'0,40'39] local-lis/les=66/67 n=1 ec=52/27 lis/c=66/66 les/c/f=67/67/0 sis=95 pruub=15.189857483s) [2] r=-1 lpr=95 pi=[66,95)/1 crt=40'39 unknown NOTIFY pruub 160.295257568s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:38 compute-0 ceph-mon[75222]: 3.4 scrub starts
Dec 09 16:06:38 compute-0 ceph-mon[75222]: 3.4 scrub ok
Dec 09 16:06:38 compute-0 ceph-mon[75222]: 5.a scrub starts
Dec 09 16:06:38 compute-0 ceph-mon[75222]: 5.a scrub ok
Dec 09 16:06:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 09 16:06:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 09 16:06:38 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 95 pg[6.f( empty local-lis/les=0/0 n=0 ec=52/27 lis/c=66/66 les/c/f=67/67/0 sis=95) [2] r=0 lpr=95 pi=[66,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:38 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 09 16:06:38 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 09 16:06:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:06:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 09 16:06:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 09 16:06:39 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 09 16:06:39 compute-0 ceph-mon[75222]: 7.0 scrub starts
Dec 09 16:06:39 compute-0 ceph-mon[75222]: 7.0 scrub ok
Dec 09 16:06:39 compute-0 ceph-mon[75222]: pgmap v181: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Dec 09 16:06:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 09 16:06:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 09 16:06:39 compute-0 ceph-mon[75222]: osdmap e95: 3 total, 3 up, 3 in
Dec 09 16:06:39 compute-0 ceph-mon[75222]: 2.c scrub starts
Dec 09 16:06:39 compute-0 ceph-mon[75222]: 2.c scrub ok
Dec 09 16:06:39 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 96 pg[6.f( v 40'39 lc 39'1 (0'0,40'39] local-lis/les=95/96 n=1 ec=52/27 lis/c=66/66 les/c/f=67/67/0 sis=95) [2] r=0 lpr=95 pi=[66,95)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:39 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec 09 16:06:39 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec 09 16:06:39 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec 09 16:06:39 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec 09 16:06:39 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Dec 09 16:06:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec 09 16:06:39 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec 09 16:06:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 09 16:06:40 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 09 16:06:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 09 16:06:40 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 09 16:06:40 compute-0 ceph-mon[75222]: osdmap e96: 3 total, 3 up, 3 in
Dec 09 16:06:40 compute-0 ceph-mon[75222]: 5.b scrub starts
Dec 09 16:06:40 compute-0 ceph-mon[75222]: 5.b scrub ok
Dec 09 16:06:40 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec 09 16:06:40 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec 09 16:06:40 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec 09 16:06:40 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec 09 16:06:40 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec 09 16:06:41 compute-0 ceph-mon[75222]: 3.0 scrub starts
Dec 09 16:06:41 compute-0 ceph-mon[75222]: 3.0 scrub ok
Dec 09 16:06:41 compute-0 ceph-mon[75222]: pgmap v184: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Dec 09 16:06:41 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 09 16:06:41 compute-0 ceph-mon[75222]: osdmap e97: 3 total, 3 up, 3 in
Dec 09 16:06:41 compute-0 ceph-mon[75222]: 10.3 scrub starts
Dec 09 16:06:41 compute-0 ceph-mon[75222]: 10.3 scrub ok
Dec 09 16:06:41 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 09 16:06:41 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 09 16:06:41 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 09 16:06:41 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 09 16:06:41 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec 09 16:06:41 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec 09 16:06:42 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 09 16:06:42 compute-0 ceph-mon[75222]: 11.a scrub starts
Dec 09 16:06:42 compute-0 ceph-mon[75222]: 11.a scrub ok
Dec 09 16:06:42 compute-0 ceph-mon[75222]: 11.14 scrub starts
Dec 09 16:06:42 compute-0 ceph-mon[75222]: 11.14 scrub ok
Dec 09 16:06:42 compute-0 ceph-mon[75222]: 2.0 scrub starts
Dec 09 16:06:42 compute-0 ceph-mon[75222]: 2.0 scrub ok
Dec 09 16:06:42 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec 09 16:06:42 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 09 16:06:42 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 09 16:06:42 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 09 16:06:42 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 09 16:06:42 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 09 16:06:43 compute-0 ceph-mon[75222]: pgmap v186: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:43 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 09 16:06:43 compute-0 ceph-mon[75222]: osdmap e98: 3 total, 3 up, 3 in
Dec 09 16:06:43 compute-0 ceph-mon[75222]: 5.0 scrub starts
Dec 09 16:06:43 compute-0 ceph-mon[75222]: 5.0 scrub ok
Dec 09 16:06:43 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 09 16:06:43 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 09 16:06:43 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 111 B/s, 0 objects/s recovering
Dec 09 16:06:43 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec 09 16:06:43 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec 09 16:06:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:06:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 09 16:06:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 09 16:06:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 09 16:06:44 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 09 16:06:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec 09 16:06:44 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec 09 16:06:44 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec 09 16:06:45 compute-0 ceph-mon[75222]: 3.2 scrub starts
Dec 09 16:06:45 compute-0 ceph-mon[75222]: 3.2 scrub ok
Dec 09 16:06:45 compute-0 ceph-mon[75222]: pgmap v188: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 111 B/s, 0 objects/s recovering
Dec 09 16:06:45 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 09 16:06:45 compute-0 ceph-mon[75222]: osdmap e99: 3 total, 3 up, 3 in
Dec 09 16:06:45 compute-0 ceph-mon[75222]: 10.0 scrub starts
Dec 09 16:06:45 compute-0 ceph-mon[75222]: 10.0 scrub ok
Dec 09 16:06:45 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Dec 09 16:06:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec 09 16:06:45 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec 09 16:06:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 09 16:06:46 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 09 16:06:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 09 16:06:46 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 09 16:06:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 100 pg[9.13( v 71'485 (0'0,71'485] local-lis/les=64/65 n=6 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=100 pruub=8.828620911s) [2] r=-1 lpr=100 pi=[64,100)/1 crt=71'484 lcod 71'484 active pruub 162.034606934s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:46 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec 09 16:06:46 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 100 pg[9.13( v 71'485 (0'0,71'485] local-lis/les=64/65 n=6 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=100 pruub=8.828352928s) [2] r=-1 lpr=100 pi=[64,100)/1 crt=71'484 lcod 71'484 unknown NOTIFY pruub 162.034606934s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:46 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 100 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=100) [2] r=0 lpr=100 pi=[64,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:46 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 09 16:06:46 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 09 16:06:46 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 09 16:06:46 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 09 16:06:47 compute-0 sshd-session[99480]: Accepted publickey for zuul from 192.168.122.30 port 44366 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:06:47 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 09 16:06:47 compute-0 systemd-logind[786]: New session 35 of user zuul.
Dec 09 16:06:47 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 09 16:06:47 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 09 16:06:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 101 pg[9.13( v 71'485 (0'0,71'485] local-lis/les=64/65 n=6 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=101) [2]/[0] r=0 lpr=101 pi=[64,101)/1 crt=71'484 lcod 71'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:47 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 101 pg[9.13( v 71'485 (0'0,71'485] local-lis/les=64/65 n=6 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=101) [2]/[0] r=0 lpr=101 pi=[64,101)/1 crt=71'484 lcod 71'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 101 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=101) [2]/[0] r=-1 lpr=101 pi=[64,101)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:47 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 101 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=101) [2]/[0] r=-1 lpr=101 pi=[64,101)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:47 compute-0 systemd[1]: Started Session 35 of User zuul.
Dec 09 16:06:47 compute-0 ceph-mon[75222]: pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Dec 09 16:06:47 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 09 16:06:47 compute-0 ceph-mon[75222]: osdmap e100: 3 total, 3 up, 3 in
Dec 09 16:06:47 compute-0 ceph-mon[75222]: 2.1 scrub starts
Dec 09 16:06:47 compute-0 ceph-mon[75222]: 2.1 scrub ok
Dec 09 16:06:47 compute-0 sshd-session[99480]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:06:47 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 09 16:06:47 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 09 16:06:47 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 112 B/s, 0 objects/s recovering
Dec 09 16:06:47 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec 09 16:06:47 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec 09 16:06:48 compute-0 python3.9[99633]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 09 16:06:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 09 16:06:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 09 16:06:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 09 16:06:48 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 09 16:06:48 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 102 pg[9.13( v 71'485 (0'0,71'485] local-lis/les=101/102 n=6 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=101) [2]/[0] async=[2] r=0 lpr=101 pi=[64,101)/1 crt=71'485 lcod 71'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:48 compute-0 ceph-mon[75222]: 7.d scrub starts
Dec 09 16:06:48 compute-0 ceph-mon[75222]: 7.d scrub ok
Dec 09 16:06:48 compute-0 ceph-mon[75222]: osdmap e101: 3 total, 3 up, 3 in
Dec 09 16:06:48 compute-0 ceph-mon[75222]: 8.7 scrub starts
Dec 09 16:06:48 compute-0 ceph-mon[75222]: 8.7 scrub ok
Dec 09 16:06:48 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec 09 16:06:48 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 09 16:06:48 compute-0 ceph-mon[75222]: osdmap e102: 3 total, 3 up, 3 in
Dec 09 16:06:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:06:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 09 16:06:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 09 16:06:49 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 09 16:06:49 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 103 pg[9.13( v 71'485 (0'0,71'485] local-lis/les=101/102 n=6 ec=56/40 lis/c=101/64 les/c/f=102/65/0 sis=103 pruub=15.200422287s) [2] async=[2] r=-1 lpr=103 pi=[64,103)/1 crt=71'485 lcod 71'484 active pruub 171.230148315s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:49 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 103 pg[9.13( v 71'485 (0'0,71'485] local-lis/les=101/102 n=6 ec=56/40 lis/c=101/64 les/c/f=102/65/0 sis=103 pruub=15.200315475s) [2] r=-1 lpr=103 pi=[64,103)/1 crt=71'485 lcod 71'484 unknown NOTIFY pruub 171.230148315s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:49 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 103 pg[9.13( v 71'485 (0'0,71'485] local-lis/les=0/0 n=6 ec=56/40 lis/c=101/64 les/c/f=102/65/0 sis=103) [2] r=0 lpr=103 pi=[64,103)/1 pct=0'0 crt=71'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:49 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 103 pg[9.13( v 71'485 (0'0,71'485] local-lis/les=0/0 n=6 ec=56/40 lis/c=101/64 les/c/f=102/65/0 sis=103) [2] r=0 lpr=103 pi=[64,103)/1 crt=71'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:49 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 09 16:06:49 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 09 16:06:49 compute-0 ceph-mon[75222]: pgmap v193: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 112 B/s, 0 objects/s recovering
Dec 09 16:06:49 compute-0 ceph-mon[75222]: osdmap e103: 3 total, 3 up, 3 in
Dec 09 16:06:49 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 09 16:06:49 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 09 16:06:49 compute-0 python3.9[99807]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:06:49 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 09 16:06:49 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 09 16:06:49 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec 09 16:06:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec 09 16:06:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 09 16:06:50 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 09 16:06:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 09 16:06:50 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 09 16:06:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 104 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=104 pruub=11.992457390s) [1] r=-1 lpr=104 pi=[63,104)/1 crt=46'483 active pruub 169.027984619s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:50 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 104 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=104 pruub=11.992415428s) [1] r=-1 lpr=104 pi=[63,104)/1 crt=46'483 unknown NOTIFY pruub 169.027984619s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:50 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 104 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=104) [1] r=0 lpr=104 pi=[63,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:50 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 104 pg[9.13( v 71'485 (0'0,71'485] local-lis/les=103/104 n=6 ec=56/40 lis/c=101/64 les/c/f=102/65/0 sis=103) [2] r=0 lpr=103 pi=[64,103)/1 crt=71'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:50 compute-0 sudo[99961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwnpreecovjwwagzxnvxejqvpbfcvron ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296410.0548568-45-40739457870453/AnsiballZ_command.py'
Dec 09 16:06:50 compute-0 sudo[99961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:06:50 compute-0 ceph-mon[75222]: 3.c scrub starts
Dec 09 16:06:50 compute-0 ceph-mon[75222]: 3.c scrub ok
Dec 09 16:06:50 compute-0 ceph-mon[75222]: 5.6 scrub starts
Dec 09 16:06:50 compute-0 ceph-mon[75222]: 5.6 scrub ok
Dec 09 16:06:50 compute-0 ceph-mon[75222]: 11.5 scrub starts
Dec 09 16:06:50 compute-0 ceph-mon[75222]: 11.5 scrub ok
Dec 09 16:06:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec 09 16:06:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 09 16:06:50 compute-0 ceph-mon[75222]: osdmap e104: 3 total, 3 up, 3 in
Dec 09 16:06:50 compute-0 python3.9[99963]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:06:50 compute-0 sudo[99961]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:50 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 09 16:06:50 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 09 16:06:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 09 16:06:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 09 16:06:51 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 09 16:06:51 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 105 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=105) [1]/[0] r=-1 lpr=105 pi=[63,105)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:51 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 105 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=105) [1]/[0] r=-1 lpr=105 pi=[63,105)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 105 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=105) [1]/[0] r=0 lpr=105 pi=[63,105)/1 crt=46'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:51 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 105 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=63/64 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=105) [1]/[0] r=0 lpr=105 pi=[63,105)/1 crt=46'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:51 compute-0 sudo[100114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usjxkgzftgwkllsdvohxrxypsjoncjgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296411.038217-57-111835185495627/AnsiballZ_stat.py'
Dec 09 16:06:51 compute-0 sudo[100114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:06:51 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 09 16:06:51 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 09 16:06:51 compute-0 ceph-mon[75222]: pgmap v196: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:51 compute-0 ceph-mon[75222]: 3.d scrub starts
Dec 09 16:06:51 compute-0 ceph-mon[75222]: 3.d scrub ok
Dec 09 16:06:51 compute-0 ceph-mon[75222]: osdmap e105: 3 total, 3 up, 3 in
Dec 09 16:06:51 compute-0 python3.9[100116]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:06:51 compute-0 sudo[100114]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:51 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec 09 16:06:51 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec 09 16:06:52 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 09 16:06:52 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 09 16:06:52 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 09 16:06:52 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 09 16:06:52 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 106 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=105/106 n=6 ec=56/40 lis/c=63/63 les/c/f=64/64/0 sis=105) [1]/[0] async=[1] r=0 lpr=105 pi=[63,105)/1 crt=46'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:52 compute-0 sudo[100268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkuokxvynaszdlnokhcxcoysepnzqsua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296411.9084017-68-3712732299758/AnsiballZ_file.py'
Dec 09 16:06:52 compute-0 sudo[100268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:06:52 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 09 16:06:52 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 09 16:06:52 compute-0 ceph-mon[75222]: 10.a scrub starts
Dec 09 16:06:52 compute-0 ceph-mon[75222]: 10.a scrub ok
Dec 09 16:06:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec 09 16:06:52 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 09 16:06:52 compute-0 ceph-mon[75222]: osdmap e106: 3 total, 3 up, 3 in
Dec 09 16:06:52 compute-0 python3.9[100270]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:06:52 compute-0 sudo[100268]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:53 compute-0 sudo[100420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtkuenkipcpnjmyusdrekmqachxcdtjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296412.8183029-77-19875435605734/AnsiballZ_file.py'
Dec 09 16:06:53 compute-0 sudo[100420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:06:53 compute-0 python3.9[100422]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:06:53 compute-0 sudo[100420]: pam_unix(sudo:session): session closed for user root
Dec 09 16:06:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 09 16:06:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 09 16:06:53 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 09 16:06:53 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 107 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=105/106 n=6 ec=56/40 lis/c=105/63 les/c/f=106/64/0 sis=107 pruub=15.000450134s) [1] async=[1] r=-1 lpr=107 pi=[63,107)/1 crt=46'483 active pruub 175.052932739s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:53 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 107 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=105/106 n=6 ec=56/40 lis/c=105/63 les/c/f=106/64/0 sis=107 pruub=15.000391960s) [1] r=-1 lpr=107 pi=[63,107)/1 crt=46'483 unknown NOTIFY pruub 175.052932739s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:53 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 107 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=105/63 les/c/f=106/64/0 sis=107) [1] r=0 lpr=107 pi=[63,107)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:53 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 107 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=105/63 les/c/f=106/64/0 sis=107) [1] r=0 lpr=107 pi=[63,107)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:53 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 09 16:06:53 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 09 16:06:53 compute-0 ceph-mon[75222]: pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:53 compute-0 ceph-mon[75222]: 10.c scrub starts
Dec 09 16:06:53 compute-0 ceph-mon[75222]: 10.c scrub ok
Dec 09 16:06:53 compute-0 ceph-mon[75222]: osdmap e107: 3 total, 3 up, 3 in
Dec 09 16:06:53 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Dec 09 16:06:53 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Dec 09 16:06:53 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Dec 09 16:06:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec 09 16:06:53 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec 09 16:06:54 compute-0 python3.9[100572]: ansible-ansible.builtin.service_facts Invoked
Dec 09 16:06:54 compute-0 network[100589]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 16:06:54 compute-0 network[100590]: 'network-scripts' will be removed from distribution in near future.
Dec 09 16:06:54 compute-0 network[100591]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 16:06:54 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 106 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=75/76 n=6 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=106 pruub=8.474792480s) [0] r=-1 lpr=106 pi=[75,106)/1 crt=46'483 active pruub 160.703674316s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:54 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 107 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=75/76 n=6 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=106 pruub=8.474655151s) [0] r=-1 lpr=106 pi=[75,106)/1 crt=46'483 unknown NOTIFY pruub 160.703674316s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:54 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 107 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=106) [0] r=0 lpr=107 pi=[75,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:06:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 09 16:06:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 09 16:06:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 09 16:06:54 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 09 16:06:54 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 108 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[75,108)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:54 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 108 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[75,108)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:54 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 108 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=75/76 n=6 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=108) [0]/[2] r=0 lpr=108 pi=[75,108)/2 crt=46'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:54 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 108 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=75/76 n=6 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=108) [0]/[2] r=0 lpr=108 pi=[75,108)/2 crt=46'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:54 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 108 pg[9.15( v 46'483 (0'0,46'483] local-lis/les=107/108 n=6 ec=56/40 lis/c=105/63 les/c/f=106/64/0 sis=107) [1] r=0 lpr=107 pi=[63,107)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:54 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec 09 16:06:54 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec 09 16:06:54 compute-0 ceph-mon[75222]: 5.e scrub starts
Dec 09 16:06:54 compute-0 ceph-mon[75222]: 5.e scrub ok
Dec 09 16:06:54 compute-0 ceph-mon[75222]: 8.5 scrub starts
Dec 09 16:06:54 compute-0 ceph-mon[75222]: 8.5 scrub ok
Dec 09 16:06:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec 09 16:06:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 09 16:06:54 compute-0 ceph-mon[75222]: osdmap e108: 3 total, 3 up, 3 in
Dec 09 16:06:54 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 09 16:06:54 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 09 16:06:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 09 16:06:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 09 16:06:55 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 09 16:06:55 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 109 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=108/109 n=6 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[75,108)/2 crt=46'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:55 compute-0 ceph-mon[75222]: pgmap v202: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Dec 09 16:06:55 compute-0 ceph-mon[75222]: 5.d scrub starts
Dec 09 16:06:55 compute-0 ceph-mon[75222]: 5.d scrub ok
Dec 09 16:06:55 compute-0 ceph-mon[75222]: 11.7 scrub starts
Dec 09 16:06:55 compute-0 ceph-mon[75222]: 11.7 scrub ok
Dec 09 16:06:55 compute-0 ceph-mon[75222]: osdmap e109: 3 total, 3 up, 3 in
Dec 09 16:06:55 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 09 16:06:55 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 09 16:06:55 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 92 B/s, 2 objects/s recovering
Dec 09 16:06:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 09 16:06:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 09 16:06:56 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 09 16:06:56 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 110 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=108/75 les/c/f=109/76/0 sis=110) [0] r=0 lpr=110 pi=[75,110)/2 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:56 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 110 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=108/75 les/c/f=109/76/0 sis=110) [0] r=0 lpr=110 pi=[75,110)/2 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:06:56 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 110 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=108/109 n=6 ec=56/40 lis/c=108/75 les/c/f=109/76/0 sis=110 pruub=15.102726936s) [0] async=[0] r=-1 lpr=110 pi=[75,110)/2 crt=46'483 active pruub 169.440460205s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:06:56 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 110 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=108/109 n=6 ec=56/40 lis/c=108/75 les/c/f=109/76/0 sis=110 pruub=15.102643013s) [0] r=-1 lpr=110 pi=[75,110)/2 crt=46'483 unknown NOTIFY pruub 169.440460205s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:06:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:06:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:06:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:06:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:06:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:06:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:06:56 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 09 16:06:56 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 09 16:06:56 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec 09 16:06:56 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec 09 16:06:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 09 16:06:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 09 16:06:57 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 09 16:06:57 compute-0 ceph-mon[75222]: 7.b scrub starts
Dec 09 16:06:57 compute-0 ceph-mon[75222]: 7.b scrub ok
Dec 09 16:06:57 compute-0 ceph-mon[75222]: pgmap v205: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 92 B/s, 2 objects/s recovering
Dec 09 16:06:57 compute-0 ceph-mon[75222]: osdmap e110: 3 total, 3 up, 3 in
Dec 09 16:06:57 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 111 pg[9.16( v 46'483 (0'0,46'483] local-lis/les=110/111 n=6 ec=56/40 lis/c=108/75 les/c/f=109/76/0 sis=110) [0] r=0 lpr=110 pi=[75,110)/2 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:06:57 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 09 16:06:57 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 09 16:06:57 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Dec 09 16:06:58 compute-0 ceph-mon[75222]: 7.4 scrub starts
Dec 09 16:06:58 compute-0 ceph-mon[75222]: 7.4 scrub ok
Dec 09 16:06:58 compute-0 ceph-mon[75222]: 7.14 scrub starts
Dec 09 16:06:58 compute-0 ceph-mon[75222]: 7.14 scrub ok
Dec 09 16:06:58 compute-0 ceph-mon[75222]: osdmap e111: 3 total, 3 up, 3 in
Dec 09 16:06:58 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 09 16:06:58 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 09 16:06:58 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 09 16:06:58 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 09 16:06:58 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec 09 16:06:58 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec 09 16:06:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:06:59 compute-0 ceph-mon[75222]: 3.10 scrub starts
Dec 09 16:06:59 compute-0 ceph-mon[75222]: 3.10 scrub ok
Dec 09 16:06:59 compute-0 ceph-mon[75222]: pgmap v208: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Dec 09 16:06:59 compute-0 ceph-mon[75222]: 5.1c scrub starts
Dec 09 16:06:59 compute-0 ceph-mon[75222]: 5.1c scrub ok
Dec 09 16:06:59 compute-0 ceph-mon[75222]: 11.4 scrub starts
Dec 09 16:06:59 compute-0 ceph-mon[75222]: 11.4 scrub ok
Dec 09 16:06:59 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:06:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec 09 16:06:59 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec 09 16:07:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 09 16:07:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 09 16:07:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 09 16:07:00 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 09 16:07:00 compute-0 ceph-mon[75222]: 7.16 scrub starts
Dec 09 16:07:00 compute-0 ceph-mon[75222]: 7.16 scrub ok
Dec 09 16:07:00 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec 09 16:07:01 compute-0 ceph-mon[75222]: pgmap v209: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 09 16:07:01 compute-0 ceph-mon[75222]: osdmap e112: 3 total, 3 up, 3 in
Dec 09 16:07:01 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 09 16:07:01 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 09 16:07:01 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 09 16:07:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec 09 16:07:01 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec 09 16:07:02 compute-0 python3.9[100851]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:07:02 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 09 16:07:02 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec 09 16:07:02 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 09 16:07:02 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 09 16:07:02 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 09 16:07:02 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 09 16:07:02 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 09 16:07:02 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 09 16:07:02 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 09 16:07:02 compute-0 python3.9[101001]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:07:03 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 09 16:07:03 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 09 16:07:03 compute-0 ceph-mon[75222]: 8.19 scrub starts
Dec 09 16:07:03 compute-0 ceph-mon[75222]: 8.19 scrub ok
Dec 09 16:07:03 compute-0 ceph-mon[75222]: pgmap v211: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 09 16:07:03 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 09 16:07:03 compute-0 ceph-mon[75222]: osdmap e113: 3 total, 3 up, 3 in
Dec 09 16:07:03 compute-0 ceph-mon[75222]: 5.1b scrub starts
Dec 09 16:07:03 compute-0 ceph-mon[75222]: 5.1b scrub ok
Dec 09 16:07:03 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 09 16:07:03 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 09 16:07:03 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Dec 09 16:07:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec 09 16:07:03 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec 09 16:07:04 compute-0 python3.9[101155]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:07:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:07:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 09 16:07:04 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 09 16:07:04 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 09 16:07:04 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 09 16:07:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 09 16:07:04 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 09 16:07:04 compute-0 ceph-mon[75222]: 3.13 scrub starts
Dec 09 16:07:04 compute-0 ceph-mon[75222]: 3.13 scrub ok
Dec 09 16:07:04 compute-0 ceph-mon[75222]: 5.7 scrub starts
Dec 09 16:07:04 compute-0 ceph-mon[75222]: 5.7 scrub ok
Dec 09 16:07:04 compute-0 ceph-mon[75222]: 7.1c scrub starts
Dec 09 16:07:04 compute-0 ceph-mon[75222]: 7.1c scrub ok
Dec 09 16:07:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec 09 16:07:04 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 09 16:07:04 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 09 16:07:04 compute-0 sudo[101311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxhaoclxrcjpvfmuoyiwpnrocltepdng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296424.566681-125-273988619757485/AnsiballZ_setup.py'
Dec 09 16:07:04 compute-0 sudo[101311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:05 compute-0 python3.9[101313]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:07:05 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 113 pg[9.19( v 71'487 (0'0,71'487] local-lis/les=64/65 n=6 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=113 pruub=14.020641327s) [2] r=-1 lpr=113 pi=[64,113)/1 crt=71'486 lcod 71'486 active pruub 186.037353516s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:05 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 113 pg[9.19( v 71'487 (0'0,71'487] local-lis/les=64/65 n=6 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=113 pruub=14.020546913s) [2] r=-1 lpr=113 pi=[64,113)/1 crt=71'486 lcod 71'486 unknown NOTIFY pruub 186.037353516s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:07:05 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 113 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=113) [2] r=0 lpr=113 pi=[64,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:07:05 compute-0 sudo[101311]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 09 16:07:05 compute-0 ceph-mon[75222]: pgmap v213: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Dec 09 16:07:05 compute-0 ceph-mon[75222]: 3.1 scrub starts
Dec 09 16:07:05 compute-0 ceph-mon[75222]: 3.1 scrub ok
Dec 09 16:07:05 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 09 16:07:05 compute-0 ceph-mon[75222]: osdmap e114: 3 total, 3 up, 3 in
Dec 09 16:07:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 09 16:07:05 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 09 16:07:05 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 115 pg[9.19( v 71'487 (0'0,71'487] local-lis/les=64/65 n=6 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=115) [2]/[0] r=0 lpr=115 pi=[64,115)/1 crt=71'486 lcod 71'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:05 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 115 pg[9.19( v 71'487 (0'0,71'487] local-lis/les=64/65 n=6 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=115) [2]/[0] r=0 lpr=115 pi=[64,115)/1 crt=71'486 lcod 71'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:07:05 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=115) [2]/[0] r=-1 lpr=115 pi=[64,115)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:05 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=115) [2]/[0] r=-1 lpr=115 pi=[64,115)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:07:05 compute-0 sudo[101395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpmczepmcextofohdlxvtqxvzzfknunf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296424.566681-125-273988619757485/AnsiballZ_dnf.py'
Dec 09 16:07:05 compute-0 sudo[101395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:05 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec 09 16:07:05 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec 09 16:07:06 compute-0 python3.9[101397]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:07:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec 09 16:07:06 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 09 16:07:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec 09 16:07:06 compute-0 ceph-mon[75222]: 7.17 scrub starts
Dec 09 16:07:06 compute-0 ceph-mon[75222]: 7.17 scrub ok
Dec 09 16:07:06 compute-0 ceph-mon[75222]: osdmap e115: 3 total, 3 up, 3 in
Dec 09 16:07:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec 09 16:07:06 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec 09 16:07:06 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 116 pg[9.19( v 71'487 (0'0,71'487] local-lis/les=115/116 n=6 ec=56/40 lis/c=64/64 les/c/f=65/65/0 sis=115) [2]/[0] async=[2] r=0 lpr=115 pi=[64,115)/1 crt=71'487 lcod 71'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:07:07 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec 09 16:07:07 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec 09 16:07:07 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec 09 16:07:07 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec 09 16:07:07 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec 09 16:07:07 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 117 pg[9.19( v 71'487 (0'0,71'487] local-lis/les=0/0 n=6 ec=56/40 lis/c=115/64 les/c/f=116/65/0 sis=117) [2] r=0 lpr=117 pi=[64,117)/1 pct=0'0 crt=71'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:07 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 117 pg[9.19( v 71'487 (0'0,71'487] local-lis/les=0/0 n=6 ec=56/40 lis/c=115/64 les/c/f=116/65/0 sis=117) [2] r=0 lpr=117 pi=[64,117)/1 crt=71'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:07:07 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 117 pg[9.19( v 71'487 (0'0,71'487] local-lis/les=115/116 n=6 ec=56/40 lis/c=115/64 les/c/f=116/65/0 sis=117 pruub=14.980715752s) [2] async=[2] r=-1 lpr=117 pi=[64,117)/1 crt=71'487 lcod 71'486 active pruub 189.299911499s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:07 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 117 pg[9.19( v 71'487 (0'0,71'487] local-lis/les=115/116 n=6 ec=56/40 lis/c=115/64 les/c/f=116/65/0 sis=117 pruub=14.980618477s) [2] r=-1 lpr=117 pi=[64,117)/1 crt=71'487 lcod 71'486 unknown NOTIFY pruub 189.299911499s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:07:07 compute-0 ceph-mon[75222]: pgmap v216: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:07 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 09 16:07:07 compute-0 ceph-mon[75222]: osdmap e116: 3 total, 3 up, 3 in
Dec 09 16:07:07 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 09 16:07:07 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 09 16:07:07 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:07 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec 09 16:07:07 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec 09 16:07:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec 09 16:07:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 09 16:07:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec 09 16:07:08 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec 09 16:07:08 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 118 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=92/93 n=6 ec=56/40 lis/c=92/92 les/c/f=93/93/0 sis=118 pruub=9.703422546s) [0] r=-1 lpr=118 pi=[92,118)/1 crt=71'487 active pruub 176.322128296s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:08 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 118 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=92/93 n=6 ec=56/40 lis/c=92/92 les/c/f=93/93/0 sis=118 pruub=9.703227043s) [0] r=-1 lpr=118 pi=[92,118)/1 crt=71'487 unknown NOTIFY pruub 176.322128296s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:07:08 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 118 pg[9.1c( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=92/92 les/c/f=93/93/0 sis=118) [0] r=0 lpr=118 pi=[92,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:07:08 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 118 pg[9.19( v 71'487 (0'0,71'487] local-lis/les=117/118 n=6 ec=56/40 lis/c=115/64 les/c/f=116/65/0 sis=117) [2] r=0 lpr=117 pi=[64,117)/1 crt=71'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:07:08 compute-0 ceph-mon[75222]: 8.b scrub starts
Dec 09 16:07:08 compute-0 ceph-mon[75222]: 8.b scrub ok
Dec 09 16:07:08 compute-0 ceph-mon[75222]: osdmap e117: 3 total, 3 up, 3 in
Dec 09 16:07:08 compute-0 ceph-mon[75222]: 3.14 scrub starts
Dec 09 16:07:08 compute-0 ceph-mon[75222]: 3.14 scrub ok
Dec 09 16:07:08 compute-0 ceph-mon[75222]: pgmap v219: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:08 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec 09 16:07:08 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 09 16:07:08 compute-0 ceph-mon[75222]: osdmap e118: 3 total, 3 up, 3 in
Dec 09 16:07:08 compute-0 sshd-session[101447]: Invalid user dspace from 146.190.31.45 port 46580
Dec 09 16:07:08 compute-0 sshd-session[101447]: Connection closed by invalid user dspace 146.190.31.45 port 46580 [preauth]
Dec 09 16:07:09 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec 09 16:07:09 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec 09 16:07:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:07:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec 09 16:07:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec 09 16:07:09 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec 09 16:07:09 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 119 pg[9.1c( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=92/92 les/c/f=93/93/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[92,119)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:09 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 119 pg[9.1c( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=92/92 les/c/f=93/93/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[92,119)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:07:09 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 119 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=92/93 n=6 ec=56/40 lis/c=92/92 les/c/f=93/93/0 sis=119) [0]/[2] r=0 lpr=119 pi=[92,119)/1 crt=71'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:09 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 119 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=92/93 n=6 ec=56/40 lis/c=92/92 les/c/f=93/93/0 sis=119) [0]/[2] r=0 lpr=119 pi=[92,119)/1 crt=71'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:07:09 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 09 16:07:09 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 09 16:07:09 compute-0 ceph-mon[75222]: 10.4 scrub starts
Dec 09 16:07:09 compute-0 ceph-mon[75222]: 10.4 scrub ok
Dec 09 16:07:09 compute-0 ceph-mon[75222]: osdmap e119: 3 total, 3 up, 3 in
Dec 09 16:07:09 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 09 16:07:09 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 09 16:07:09 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 2 objects/s recovering
Dec 09 16:07:10 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 09 16:07:10 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 09 16:07:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec 09 16:07:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec 09 16:07:10 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec 09 16:07:10 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 120 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=119/120 n=6 ec=56/40 lis/c=92/92 les/c/f=93/93/0 sis=119) [0]/[2] async=[0] r=0 lpr=119 pi=[92,119)/1 crt=71'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:07:10 compute-0 ceph-mon[75222]: 11.2 scrub starts
Dec 09 16:07:10 compute-0 ceph-mon[75222]: 11.2 scrub ok
Dec 09 16:07:10 compute-0 ceph-mon[75222]: 11.1d scrub starts
Dec 09 16:07:10 compute-0 ceph-mon[75222]: 11.1d scrub ok
Dec 09 16:07:10 compute-0 ceph-mon[75222]: pgmap v222: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 2 objects/s recovering
Dec 09 16:07:10 compute-0 ceph-mon[75222]: 7.9 scrub starts
Dec 09 16:07:10 compute-0 ceph-mon[75222]: 7.9 scrub ok
Dec 09 16:07:10 compute-0 ceph-mon[75222]: osdmap e120: 3 total, 3 up, 3 in
Dec 09 16:07:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec 09 16:07:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec 09 16:07:11 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec 09 16:07:11 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 121 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=119/120 n=6 ec=56/40 lis/c=119/92 les/c/f=120/93/0 sis=121 pruub=14.997596741s) [0] async=[0] r=-1 lpr=121 pi=[92,121)/1 crt=71'487 active pruub 184.364532471s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:11 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 121 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=119/120 n=6 ec=56/40 lis/c=119/92 les/c/f=120/93/0 sis=121 pruub=14.997526169s) [0] r=-1 lpr=121 pi=[92,121)/1 crt=71'487 unknown NOTIFY pruub 184.364532471s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:07:11 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 121 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=0/0 n=6 ec=56/40 lis/c=119/92 les/c/f=120/93/0 sis=121) [0] r=0 lpr=121 pi=[92,121)/1 pct=0'0 crt=71'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:11 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 121 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=0/0 n=6 ec=56/40 lis/c=119/92 les/c/f=120/93/0 sis=121) [0] r=0 lpr=121 pi=[92,121)/1 crt=71'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:07:11 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 2 objects/s recovering
Dec 09 16:07:11 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 09 16:07:11 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 09 16:07:12 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 09 16:07:12 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 09 16:07:12 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 09 16:07:12 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 09 16:07:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec 09 16:07:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec 09 16:07:12 compute-0 ceph-mon[75222]: osdmap e121: 3 total, 3 up, 3 in
Dec 09 16:07:12 compute-0 ceph-mon[75222]: 5.4 scrub starts
Dec 09 16:07:12 compute-0 ceph-mon[75222]: 5.4 scrub ok
Dec 09 16:07:12 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec 09 16:07:12 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 122 pg[9.1c( v 71'487 (0'0,71'487] local-lis/les=121/122 n=6 ec=56/40 lis/c=119/92 les/c/f=120/93/0 sis=121) [0] r=0 lpr=121 pi=[92,121)/1 crt=71'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:07:12 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 09 16:07:12 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 09 16:07:13 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 09 16:07:13 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 09 16:07:13 compute-0 ceph-mon[75222]: pgmap v225: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 2 objects/s recovering
Dec 09 16:07:13 compute-0 ceph-mon[75222]: 7.10 scrub starts
Dec 09 16:07:13 compute-0 ceph-mon[75222]: 7.10 scrub ok
Dec 09 16:07:13 compute-0 ceph-mon[75222]: 7.e scrub starts
Dec 09 16:07:13 compute-0 ceph-mon[75222]: 7.e scrub ok
Dec 09 16:07:13 compute-0 ceph-mon[75222]: osdmap e122: 3 total, 3 up, 3 in
Dec 09 16:07:13 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 09 16:07:13 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 09 16:07:13 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec 09 16:07:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec 09 16:07:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:07:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec 09 16:07:14 compute-0 ceph-mon[75222]: 8.1e scrub starts
Dec 09 16:07:14 compute-0 ceph-mon[75222]: 8.1e scrub ok
Dec 09 16:07:14 compute-0 ceph-mon[75222]: 11.9 scrub starts
Dec 09 16:07:14 compute-0 ceph-mon[75222]: 11.9 scrub ok
Dec 09 16:07:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec 09 16:07:14 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 09 16:07:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec 09 16:07:14 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec 09 16:07:14 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 09 16:07:14 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 09 16:07:15 compute-0 ceph-mon[75222]: 7.12 scrub starts
Dec 09 16:07:15 compute-0 ceph-mon[75222]: 7.12 scrub ok
Dec 09 16:07:15 compute-0 ceph-mon[75222]: pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:15 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 09 16:07:15 compute-0 ceph-mon[75222]: osdmap e123: 3 total, 3 up, 3 in
Dec 09 16:07:15 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 1 objects/s recovering
Dec 09 16:07:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec 09 16:07:15 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec 09 16:07:15 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 09 16:07:15 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 09 16:07:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec 09 16:07:16 compute-0 ceph-mon[75222]: 5.19 scrub starts
Dec 09 16:07:16 compute-0 ceph-mon[75222]: 5.19 scrub ok
Dec 09 16:07:16 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec 09 16:07:16 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 09 16:07:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec 09 16:07:16 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec 09 16:07:16 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 124 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=75/76 n=6 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=124 pruub=10.285429001s) [0] r=-1 lpr=124 pi=[75,124)/1 crt=71'485 active pruub 184.704376221s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:16 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 124 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=75/76 n=6 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=124 pruub=10.285389900s) [0] r=-1 lpr=124 pi=[75,124)/1 crt=71'485 unknown NOTIFY pruub 184.704376221s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:07:16 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 124 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=124) [0] r=0 lpr=124 pi=[75,124)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:07:16 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 09 16:07:16 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 09 16:07:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec 09 16:07:17 compute-0 ceph-mon[75222]: pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 1 objects/s recovering
Dec 09 16:07:17 compute-0 ceph-mon[75222]: 5.18 scrub starts
Dec 09 16:07:17 compute-0 ceph-mon[75222]: 5.18 scrub ok
Dec 09 16:07:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 09 16:07:17 compute-0 ceph-mon[75222]: osdmap e124: 3 total, 3 up, 3 in
Dec 09 16:07:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec 09 16:07:17 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec 09 16:07:17 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 125 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=125) [0]/[2] r=-1 lpr=125 pi=[75,125)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:17 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 125 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=125) [0]/[2] r=-1 lpr=125 pi=[75,125)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:07:17 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 125 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=75/76 n=6 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=125) [0]/[2] r=0 lpr=125 pi=[75,125)/1 crt=71'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:17 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 125 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=75/76 n=6 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=125) [0]/[2] r=0 lpr=125 pi=[75,125)/1 crt=71'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:07:17 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 1 objects/s recovering
Dec 09 16:07:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 16:07:17 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:07:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec 09 16:07:18 compute-0 ceph-mon[75222]: 5.1d scrub starts
Dec 09 16:07:18 compute-0 ceph-mon[75222]: 5.1d scrub ok
Dec 09 16:07:18 compute-0 ceph-mon[75222]: osdmap e125: 3 total, 3 up, 3 in
Dec 09 16:07:18 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 09 16:07:18 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:07:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec 09 16:07:18 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec 09 16:07:18 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 126 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=80/81 n=6 ec=56/40 lis/c=80/80 les/c/f=81/81/0 sis=126 pruub=15.544500351s) [1] r=-1 lpr=126 pi=[80,126)/1 crt=46'483 active pruub 191.991699219s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:18 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 126 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=80/81 n=6 ec=56/40 lis/c=80/80 les/c/f=81/81/0 sis=126 pruub=15.544463158s) [1] r=-1 lpr=126 pi=[80,126)/1 crt=46'483 unknown NOTIFY pruub 191.991699219s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:07:18 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 126 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=125/126 n=6 ec=56/40 lis/c=75/75 les/c/f=76/76/0 sis=125) [0]/[2] async=[0] r=0 lpr=125 pi=[75,125)/1 crt=71'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:07:18 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 126 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=80/80 les/c/f=81/81/0 sis=126) [1] r=0 lpr=126 pi=[80,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:07:18 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 09 16:07:18 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 09 16:07:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:07:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec 09 16:07:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec 09 16:07:19 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 127 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=80/81 n=6 ec=56/40 lis/c=80/80 les/c/f=81/81/0 sis=127) [1]/[2] r=0 lpr=127 pi=[80,127)/1 crt=46'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:19 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 127 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=80/81 n=6 ec=56/40 lis/c=80/80 les/c/f=81/81/0 sis=127) [1]/[2] r=0 lpr=127 pi=[80,127)/1 crt=46'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 09 16:07:19 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 127 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=125/126 n=6 ec=56/40 lis/c=125/75 les/c/f=126/76/0 sis=127 pruub=15.106325150s) [0] async=[0] r=-1 lpr=127 pi=[75,127)/1 crt=71'485 active pruub 192.449157715s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:19 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec 09 16:07:19 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 127 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=125/126 n=6 ec=56/40 lis/c=125/75 les/c/f=126/76/0 sis=127 pruub=15.106210709s) [0] r=-1 lpr=127 pi=[75,127)/1 crt=71'485 unknown NOTIFY pruub 192.449157715s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:07:19 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 127 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=0/0 n=6 ec=56/40 lis/c=125/75 les/c/f=126/76/0 sis=127) [0] r=0 lpr=127 pi=[75,127)/1 pct=0'0 crt=71'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:19 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 127 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=0/0 n=6 ec=56/40 lis/c=125/75 les/c/f=126/76/0 sis=127) [0] r=0 lpr=127 pi=[75,127)/1 crt=71'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:07:19 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 127 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=80/80 les/c/f=81/81/0 sis=127) [1]/[2] r=-1 lpr=127 pi=[80,127)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:19 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 127 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=80/80 les/c/f=81/81/0 sis=127) [1]/[2] r=-1 lpr=127 pi=[80,127)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 09 16:07:19 compute-0 ceph-mon[75222]: pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 1 objects/s recovering
Dec 09 16:07:19 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 16:07:19 compute-0 ceph-mon[75222]: osdmap e126: 3 total, 3 up, 3 in
Dec 09 16:07:19 compute-0 ceph-mon[75222]: osdmap e127: 3 total, 3 up, 3 in
Dec 09 16:07:19 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:19 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 09 16:07:19 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 09 16:07:20 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 09 16:07:20 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 09 16:07:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec 09 16:07:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec 09 16:07:20 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec 09 16:07:20 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 128 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=127/128 n=6 ec=56/40 lis/c=80/80 les/c/f=81/81/0 sis=127) [1]/[2] async=[1] r=0 lpr=127 pi=[80,127)/1 crt=46'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:07:20 compute-0 ceph-osd[86013]: osd.0 pg_epoch: 128 pg[9.1e( v 71'485 (0'0,71'485] local-lis/les=127/128 n=6 ec=56/40 lis/c=125/75 les/c/f=126/76/0 sis=127) [0] r=0 lpr=127 pi=[75,127)/1 crt=71'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:07:20 compute-0 ceph-mon[75222]: 10.10 scrub starts
Dec 09 16:07:20 compute-0 ceph-mon[75222]: 10.10 scrub ok
Dec 09 16:07:20 compute-0 ceph-mon[75222]: osdmap e128: 3 total, 3 up, 3 in
Dec 09 16:07:20 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 09 16:07:20 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 09 16:07:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec 09 16:07:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec 09 16:07:21 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec 09 16:07:21 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 129 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=127/128 n=6 ec=56/40 lis/c=127/80 les/c/f=128/81/0 sis=129 pruub=15.005140305s) [1] async=[1] r=-1 lpr=129 pi=[80,129)/1 crt=46'483 active pruub 194.359207153s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:21 compute-0 ceph-osd[88099]: osd.2 pg_epoch: 129 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=127/128 n=6 ec=56/40 lis/c=127/80 les/c/f=128/81/0 sis=129 pruub=15.004867554s) [1] r=-1 lpr=129 pi=[80,129)/1 crt=46'483 unknown NOTIFY pruub 194.359207153s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 16:07:21 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 129 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=127/80 les/c/f=128/81/0 sis=129) [1] r=0 lpr=129 pi=[80,129)/1 pct=0'0 crt=46'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 09 16:07:21 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 129 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=0/0 n=6 ec=56/40 lis/c=127/80 les/c/f=128/81/0 sis=129) [1] r=0 lpr=129 pi=[80,129)/1 crt=46'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 16:07:21 compute-0 ceph-mon[75222]: pgmap v235: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:21 compute-0 ceph-mon[75222]: 2.1b scrub starts
Dec 09 16:07:21 compute-0 ceph-mon[75222]: 2.1b scrub ok
Dec 09 16:07:21 compute-0 ceph-mon[75222]: 7.5 scrub starts
Dec 09 16:07:21 compute-0 ceph-mon[75222]: 7.5 scrub ok
Dec 09 16:07:21 compute-0 ceph-mon[75222]: osdmap e129: 3 total, 3 up, 3 in
Dec 09 16:07:21 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec 09 16:07:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec 09 16:07:22 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec 09 16:07:22 compute-0 ceph-osd[87055]: osd.1 pg_epoch: 130 pg[9.1f( v 46'483 (0'0,46'483] local-lis/les=129/130 n=6 ec=56/40 lis/c=127/80 les/c/f=128/81/0 sis=129) [1] r=0 lpr=129 pi=[80,129)/1 crt=46'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 16:07:22 compute-0 ceph-mon[75222]: 5.1a scrub starts
Dec 09 16:07:22 compute-0 ceph-mon[75222]: 5.1a scrub ok
Dec 09 16:07:22 compute-0 ceph-mon[75222]: osdmap e130: 3 total, 3 up, 3 in
Dec 09 16:07:23 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec 09 16:07:23 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec 09 16:07:23 compute-0 ceph-mon[75222]: pgmap v238: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:23 compute-0 ceph-mon[75222]: 10.8 scrub starts
Dec 09 16:07:23 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 1 objects/s recovering
Dec 09 16:07:24 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec 09 16:07:24 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec 09 16:07:24 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec 09 16:07:24 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec 09 16:07:24 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 09 16:07:24 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 09 16:07:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:07:24 compute-0 ceph-mon[75222]: 10.8 scrub ok
Dec 09 16:07:24 compute-0 ceph-mon[75222]: 2.1c scrub starts
Dec 09 16:07:24 compute-0 ceph-mon[75222]: 2.1c scrub ok
Dec 09 16:07:25 compute-0 ceph-mon[75222]: pgmap v240: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 1 objects/s recovering
Dec 09 16:07:25 compute-0 ceph-mon[75222]: 10.11 scrub starts
Dec 09 16:07:25 compute-0 ceph-mon[75222]: 10.11 scrub ok
Dec 09 16:07:25 compute-0 ceph-mon[75222]: 8.d scrub starts
Dec 09 16:07:25 compute-0 ceph-mon[75222]: 8.d scrub ok
Dec 09 16:07:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:07:25
Dec 09 16:07:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:07:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Dec 09 16:07:25 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 7 op/s; 80 B/s, 3 objects/s recovering
Dec 09 16:07:26 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 09 16:07:26 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 09 16:07:26 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 09 16:07:26 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:07:26 compute-0 ceph-mon[75222]: 2.f scrub starts
Dec 09 16:07:26 compute-0 ceph-mon[75222]: 2.f scrub ok
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:07:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:07:26 compute-0 sudo[101519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:07:26 compute-0 sudo[101519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:07:26 compute-0 sudo[101519]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:26 compute-0 sudo[101544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:07:26 compute-0 sudo[101544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:07:27 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 09 16:07:27 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 09 16:07:27 compute-0 sudo[101544]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:07:27 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:07:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:07:27 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:07:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:07:27 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:07:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:07:27 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:07:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:07:27 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:07:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:07:27 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:07:27 compute-0 sudo[101600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:07:27 compute-0 sudo[101600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:07:27 compute-0 ceph-mon[75222]: pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 7 op/s; 80 B/s, 3 objects/s recovering
Dec 09 16:07:27 compute-0 ceph-mon[75222]: 3.5 scrub starts
Dec 09 16:07:27 compute-0 ceph-mon[75222]: 3.5 scrub ok
Dec 09 16:07:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:07:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:07:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:07:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:07:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:07:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:07:27 compute-0 sudo[101600]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:27 compute-0 sudo[101625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:07:27 compute-0 sudo[101625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:07:27 compute-0 podman[101663]: 2025-12-09 16:07:27.868463028 +0000 UTC m=+0.043513109 container create f0567bfc45f5aa987932fda01debd30397fa47ef05c3ce421bd827047b688c1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kirch, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 09 16:07:27 compute-0 systemd[1]: Started libpod-conmon-f0567bfc45f5aa987932fda01debd30397fa47ef05c3ce421bd827047b688c1b.scope.
Dec 09 16:07:27 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:07:27 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 270 B/s wr, 5 op/s; 43 B/s, 1 objects/s recovering
Dec 09 16:07:27 compute-0 podman[101663]: 2025-12-09 16:07:27.939910065 +0000 UTC m=+0.114960146 container init f0567bfc45f5aa987932fda01debd30397fa47ef05c3ce421bd827047b688c1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kirch, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 09 16:07:27 compute-0 podman[101663]: 2025-12-09 16:07:27.849939558 +0000 UTC m=+0.024989659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:07:27 compute-0 podman[101663]: 2025-12-09 16:07:27.953450967 +0000 UTC m=+0.128501048 container start f0567bfc45f5aa987932fda01debd30397fa47ef05c3ce421bd827047b688c1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kirch, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 09 16:07:27 compute-0 podman[101663]: 2025-12-09 16:07:27.956646455 +0000 UTC m=+0.131696566 container attach f0567bfc45f5aa987932fda01debd30397fa47ef05c3ce421bd827047b688c1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:07:27 compute-0 loving_kirch[101679]: 167 167
Dec 09 16:07:27 compute-0 systemd[1]: libpod-f0567bfc45f5aa987932fda01debd30397fa47ef05c3ce421bd827047b688c1b.scope: Deactivated successfully.
Dec 09 16:07:27 compute-0 podman[101663]: 2025-12-09 16:07:27.964519662 +0000 UTC m=+0.139569773 container died f0567bfc45f5aa987932fda01debd30397fa47ef05c3ce421bd827047b688c1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kirch, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:07:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bcd3208b4cc92aca0e607ce8791b2b251ea81341ee73d824c96b58d39eccce8-merged.mount: Deactivated successfully.
Dec 09 16:07:28 compute-0 podman[101663]: 2025-12-09 16:07:28.005790858 +0000 UTC m=+0.180840929 container remove f0567bfc45f5aa987932fda01debd30397fa47ef05c3ce421bd827047b688c1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 09 16:07:28 compute-0 systemd[1]: libpod-conmon-f0567bfc45f5aa987932fda01debd30397fa47ef05c3ce421bd827047b688c1b.scope: Deactivated successfully.
Dec 09 16:07:28 compute-0 podman[101703]: 2025-12-09 16:07:28.19111685 +0000 UTC m=+0.050748228 container create e4e0e801be725505fe34b41c55450cac9418c0e371e1d0ec600761ecea1208e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_wing, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 09 16:07:28 compute-0 systemd[1]: Started libpod-conmon-e4e0e801be725505fe34b41c55450cac9418c0e371e1d0ec600761ecea1208e9.scope.
Dec 09 16:07:28 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85969b442f57369315b7ac91032fb8a73999b43d58fe62581e078c86d76d890e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85969b442f57369315b7ac91032fb8a73999b43d58fe62581e078c86d76d890e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85969b442f57369315b7ac91032fb8a73999b43d58fe62581e078c86d76d890e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85969b442f57369315b7ac91032fb8a73999b43d58fe62581e078c86d76d890e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85969b442f57369315b7ac91032fb8a73999b43d58fe62581e078c86d76d890e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:28 compute-0 podman[101703]: 2025-12-09 16:07:28.168153868 +0000 UTC m=+0.027785276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:07:28 compute-0 podman[101703]: 2025-12-09 16:07:28.27902506 +0000 UTC m=+0.138656438 container init e4e0e801be725505fe34b41c55450cac9418c0e371e1d0ec600761ecea1208e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 09 16:07:28 compute-0 podman[101703]: 2025-12-09 16:07:28.28449914 +0000 UTC m=+0.144130498 container start e4e0e801be725505fe34b41c55450cac9418c0e371e1d0ec600761ecea1208e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 09 16:07:28 compute-0 podman[101703]: 2025-12-09 16:07:28.287631227 +0000 UTC m=+0.147262585 container attach e4e0e801be725505fe34b41c55450cac9418c0e371e1d0ec600761ecea1208e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_wing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:07:28 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 09 16:07:28 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 09 16:07:28 compute-0 ceph-mon[75222]: 5.1 scrub starts
Dec 09 16:07:28 compute-0 ceph-mon[75222]: 5.1 scrub ok
Dec 09 16:07:28 compute-0 ceph-mon[75222]: 10.7 scrub starts
Dec 09 16:07:28 compute-0 ceph-mon[75222]: 10.7 scrub ok
Dec 09 16:07:28 compute-0 hardcore_wing[101720]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:07:28 compute-0 hardcore_wing[101720]: --> All data devices are unavailable
Dec 09 16:07:28 compute-0 systemd[1]: libpod-e4e0e801be725505fe34b41c55450cac9418c0e371e1d0ec600761ecea1208e9.scope: Deactivated successfully.
Dec 09 16:07:28 compute-0 podman[101703]: 2025-12-09 16:07:28.781529251 +0000 UTC m=+0.641160609 container died e4e0e801be725505fe34b41c55450cac9418c0e371e1d0ec600761ecea1208e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_wing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 09 16:07:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-85969b442f57369315b7ac91032fb8a73999b43d58fe62581e078c86d76d890e-merged.mount: Deactivated successfully.
Dec 09 16:07:28 compute-0 podman[101703]: 2025-12-09 16:07:28.842851559 +0000 UTC m=+0.702482947 container remove e4e0e801be725505fe34b41c55450cac9418c0e371e1d0ec600761ecea1208e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_wing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 09 16:07:28 compute-0 systemd[1]: libpod-conmon-e4e0e801be725505fe34b41c55450cac9418c0e371e1d0ec600761ecea1208e9.scope: Deactivated successfully.
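
The short-lived container above (hardcore_wing, the usual podman create/start/attach/died/remove cycle cephadm uses for one-shot commands) is cephadm evaluating its OSD drive group: ceph-volume saw 0 physical and 3 LVM data devices and reported all of them unavailable, evidently because each already carries a BlueStore LV (see the lvm list output further down). A minimal sketch for surfacing the rejection reasons, assuming ceph-volume's inventory JSON fields 'available' and 'rejected_reasons' and that it is run inside the ceph container (e.g. via cephadm ceph-volume -- inventory); this is an illustration, not a command taken from the log:

    import json, subprocess

    # List devices ceph-volume considers unusable and why. Field names follow
    # ceph-volume's inventory JSON ('available', 'rejected_reasons', 'path').
    out = subprocess.check_output(["ceph-volume", "inventory", "--format", "json"])
    for dev in json.loads(out):
        if not dev.get("available"):
            print(dev["path"], "rejected:", ", ".join(dev.get("rejected_reasons", [])))
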
Dec 09 16:07:28 compute-0 sudo[101625]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:28 compute-0 sudo[101753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:07:28 compute-0 sudo[101753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:07:28 compute-0 sudo[101753]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:29 compute-0 sudo[101778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:07:29 compute-0 sudo[101778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:07:29 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 09 16:07:29 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 09 16:07:29 compute-0 podman[101813]: 2025-12-09 16:07:29.289341839 +0000 UTC m=+0.036082954 container create b05cd1f765613d5db8ffb886c9e278b290d76640200f424c92bcac26e501b45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 09 16:07:29 compute-0 systemd[1]: Started libpod-conmon-b05cd1f765613d5db8ffb886c9e278b290d76640200f424c92bcac26e501b45f.scope.
Dec 09 16:07:29 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:07:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:07:29 compute-0 podman[101813]: 2025-12-09 16:07:29.366235136 +0000 UTC m=+0.112976291 container init b05cd1f765613d5db8ffb886c9e278b290d76640200f424c92bcac26e501b45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 09 16:07:29 compute-0 podman[101813]: 2025-12-09 16:07:29.273995097 +0000 UTC m=+0.020736232 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:07:29 compute-0 podman[101813]: 2025-12-09 16:07:29.372788006 +0000 UTC m=+0.119529131 container start b05cd1f765613d5db8ffb886c9e278b290d76640200f424c92bcac26e501b45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:07:29 compute-0 podman[101813]: 2025-12-09 16:07:29.37617338 +0000 UTC m=+0.122914505 container attach b05cd1f765613d5db8ffb886c9e278b290d76640200f424c92bcac26e501b45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 09 16:07:29 compute-0 distracted_hoover[101830]: 167 167
Dec 09 16:07:29 compute-0 systemd[1]: libpod-b05cd1f765613d5db8ffb886c9e278b290d76640200f424c92bcac26e501b45f.scope: Deactivated successfully.
Dec 09 16:07:29 compute-0 podman[101813]: 2025-12-09 16:07:29.378039261 +0000 UTC m=+0.124780386 container died b05cd1f765613d5db8ffb886c9e278b290d76640200f424c92bcac26e501b45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 09 16:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-6716cf8defbd92c312843c1c12bc9d71b8a50ea9b5718c9c6de6bfef1254a29d-merged.mount: Deactivated successfully.
Dec 09 16:07:29 compute-0 podman[101813]: 2025-12-09 16:07:29.414611348 +0000 UTC m=+0.161352463 container remove b05cd1f765613d5db8ffb886c9e278b290d76640200f424c92bcac26e501b45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:07:29 compute-0 systemd[1]: libpod-conmon-b05cd1f765613d5db8ffb886c9e278b290d76640200f424c92bcac26e501b45f.scope: Deactivated successfully.
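
The one-line output "167 167" from distracted_hoover is consistent with cephadm probing the ceph uid/gid inside the image (167:167 is the ceph user and group on CentOS-based Ceph images), for instance by stat'ing /var/lib/ceph in a throwaway container. A hypothetical re-run of such a probe, under that assumption (the stat-based probe mirrors cephadm's uid/gid extraction, it is not shown verbatim in the log):

    import subprocess

    # Print the owner uid/gid of /var/lib/ceph inside the image used above.
    img = "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"
    print(subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", img,
         "-c", "%u %g", "/var/lib/ceph"], text=True).strip())  # expect: 167 167
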
Dec 09 16:07:29 compute-0 ceph-mon[75222]: pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 270 B/s wr, 5 op/s; 43 B/s, 1 objects/s recovering
Dec 09 16:07:29 compute-0 podman[101853]: 2025-12-09 16:07:29.592850164 +0000 UTC m=+0.049317248 container create edb2ad3cfb74b2f51c9a9f38c7009035cc206964f5caa77edbd1875d5b4c11a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:07:29 compute-0 systemd[1]: Started libpod-conmon-edb2ad3cfb74b2f51c9a9f38c7009035cc206964f5caa77edbd1875d5b4c11a9.scope.
Dec 09 16:07:29 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375dc1f516381bac88c775b0efe1a893e3c52b99c6fad6734a35a001f931031f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375dc1f516381bac88c775b0efe1a893e3c52b99c6fad6734a35a001f931031f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375dc1f516381bac88c775b0efe1a893e3c52b99c6fad6734a35a001f931031f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375dc1f516381bac88c775b0efe1a893e3c52b99c6fad6734a35a001f931031f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:29 compute-0 podman[101853]: 2025-12-09 16:07:29.568087222 +0000 UTC m=+0.024554296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:07:29 compute-0 podman[101853]: 2025-12-09 16:07:29.672545438 +0000 UTC m=+0.129012502 container init edb2ad3cfb74b2f51c9a9f38c7009035cc206964f5caa77edbd1875d5b4c11a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 16:07:29 compute-0 podman[101853]: 2025-12-09 16:07:29.685948107 +0000 UTC m=+0.142415191 container start edb2ad3cfb74b2f51c9a9f38c7009035cc206964f5caa77edbd1875d5b4c11a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:07:29 compute-0 podman[101853]: 2025-12-09 16:07:29.690924774 +0000 UTC m=+0.147391868 container attach edb2ad3cfb74b2f51c9a9f38c7009035cc206964f5caa77edbd1875d5b4c11a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:07:29 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 239 B/s wr, 5 op/s; 38 B/s, 1 objects/s recovering
Dec 09 16:07:29 compute-0 gracious_noether[101871]: {
Dec 09 16:07:29 compute-0 gracious_noether[101871]:     "0": [
Dec 09 16:07:29 compute-0 gracious_noether[101871]:         {
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "devices": [
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "/dev/loop3"
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             ],
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_name": "ceph_lv0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_size": "21470642176",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "name": "ceph_lv0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "tags": {
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.cluster_name": "ceph",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.crush_device_class": "",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.encrypted": "0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.objectstore": "bluestore",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.osd_id": "0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.type": "block",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.vdo": "0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.with_tpm": "0"
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             },
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "type": "block",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "vg_name": "ceph_vg0"
Dec 09 16:07:29 compute-0 gracious_noether[101871]:         }
Dec 09 16:07:29 compute-0 gracious_noether[101871]:     ],
Dec 09 16:07:29 compute-0 gracious_noether[101871]:     "1": [
Dec 09 16:07:29 compute-0 gracious_noether[101871]:         {
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "devices": [
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "/dev/loop4"
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             ],
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_name": "ceph_lv1",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_size": "21470642176",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "name": "ceph_lv1",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "tags": {
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.cluster_name": "ceph",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.crush_device_class": "",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.encrypted": "0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.objectstore": "bluestore",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.osd_id": "1",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.type": "block",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.vdo": "0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.with_tpm": "0"
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             },
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "type": "block",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "vg_name": "ceph_vg1"
Dec 09 16:07:29 compute-0 gracious_noether[101871]:         }
Dec 09 16:07:29 compute-0 gracious_noether[101871]:     ],
Dec 09 16:07:29 compute-0 gracious_noether[101871]:     "2": [
Dec 09 16:07:29 compute-0 gracious_noether[101871]:         {
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "devices": [
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "/dev/loop5"
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             ],
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_name": "ceph_lv2",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_size": "21470642176",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "name": "ceph_lv2",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "tags": {
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.cluster_name": "ceph",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.crush_device_class": "",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.encrypted": "0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.objectstore": "bluestore",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.osd_id": "2",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.type": "block",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.vdo": "0",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:                 "ceph.with_tpm": "0"
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             },
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "type": "block",
Dec 09 16:07:29 compute-0 gracious_noether[101871]:             "vg_name": "ceph_vg2"
Dec 09 16:07:29 compute-0 gracious_noether[101871]:         }
Dec 09 16:07:29 compute-0 gracious_noether[101871]:     ]
Dec 09 16:07:29 compute-0 gracious_noether[101871]: }
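
The JSON above is the output of the `ceph-volume lvm list --format json` call launched at 16:07:29 (sudo[101778]): one entry per OSD id (0, 1, 2), each a single BlueStore block LV backed by one loop-device PV. A minimal parsing sketch, assuming the JSON has been captured to a file; all key names are taken from the output above, only the filename is hypothetical:

    import json

    # Map OSD id -> (LV path, backing devices) from 'ceph-volume lvm list --format json'.
    with open("lvm_list.json") as f:   # hypothetical capture of the JSON above
        listing = json.load(f)
    for osd_id, lvs in sorted(listing.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(fsid {tags['ceph.osd_fsid']}, encrypted={tags['ceph.encrypted']})")
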
Dec 09 16:07:30 compute-0 systemd[1]: libpod-edb2ad3cfb74b2f51c9a9f38c7009035cc206964f5caa77edbd1875d5b4c11a9.scope: Deactivated successfully.
Dec 09 16:07:30 compute-0 podman[101853]: 2025-12-09 16:07:30.017455862 +0000 UTC m=+0.473922946 container died edb2ad3cfb74b2f51c9a9f38c7009035cc206964f5caa77edbd1875d5b4c11a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:07:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-375dc1f516381bac88c775b0efe1a893e3c52b99c6fad6734a35a001f931031f-merged.mount: Deactivated successfully.
Dec 09 16:07:30 compute-0 podman[101853]: 2025-12-09 16:07:30.067751357 +0000 UTC m=+0.524218411 container remove edb2ad3cfb74b2f51c9a9f38c7009035cc206964f5caa77edbd1875d5b4c11a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:07:30 compute-0 systemd[1]: libpod-conmon-edb2ad3cfb74b2f51c9a9f38c7009035cc206964f5caa77edbd1875d5b4c11a9.scope: Deactivated successfully.
Dec 09 16:07:30 compute-0 sudo[101778]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:30 compute-0 sudo[101899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:07:30 compute-0 sudo[101899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:07:30 compute-0 sudo[101899]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:30 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 09 16:07:30 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 09 16:07:30 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec 09 16:07:30 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec 09 16:07:30 compute-0 sudo[101924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:07:30 compute-0 sudo[101924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:07:30 compute-0 podman[101962]: 2025-12-09 16:07:30.537962681 +0000 UTC m=+0.047971332 container create fca14b0e67142746d5139160283ca3be89dd6497a625b22217f1e71587be79b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:07:30 compute-0 ceph-mon[75222]: 11.b scrub starts
Dec 09 16:07:30 compute-0 ceph-mon[75222]: 11.b scrub ok
Dec 09 16:07:30 compute-0 ceph-mon[75222]: 10.13 scrub starts
Dec 09 16:07:30 compute-0 ceph-mon[75222]: 10.13 scrub ok
Dec 09 16:07:30 compute-0 systemd[1]: Started libpod-conmon-fca14b0e67142746d5139160283ca3be89dd6497a625b22217f1e71587be79b5.scope.
Dec 09 16:07:30 compute-0 podman[101962]: 2025-12-09 16:07:30.516298394 +0000 UTC m=+0.026307085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:07:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:07:30 compute-0 podman[101962]: 2025-12-09 16:07:30.627857235 +0000 UTC m=+0.137865926 container init fca14b0e67142746d5139160283ca3be89dd6497a625b22217f1e71587be79b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wescoff, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:07:30 compute-0 podman[101962]: 2025-12-09 16:07:30.635866346 +0000 UTC m=+0.145874997 container start fca14b0e67142746d5139160283ca3be89dd6497a625b22217f1e71587be79b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:07:30 compute-0 podman[101962]: 2025-12-09 16:07:30.638995552 +0000 UTC m=+0.149004213 container attach fca14b0e67142746d5139160283ca3be89dd6497a625b22217f1e71587be79b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wescoff, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:07:30 compute-0 quizzical_wescoff[101979]: 167 167
Dec 09 16:07:30 compute-0 systemd[1]: libpod-fca14b0e67142746d5139160283ca3be89dd6497a625b22217f1e71587be79b5.scope: Deactivated successfully.
Dec 09 16:07:30 compute-0 podman[101962]: 2025-12-09 16:07:30.641712037 +0000 UTC m=+0.151720688 container died fca14b0e67142746d5139160283ca3be89dd6497a625b22217f1e71587be79b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:07:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-569a09709997661078d1fee33cd1329291176f05baddbd3bff733f04ff7d9a71-merged.mount: Deactivated successfully.
Dec 09 16:07:30 compute-0 podman[101962]: 2025-12-09 16:07:30.683465016 +0000 UTC m=+0.193473667 container remove fca14b0e67142746d5139160283ca3be89dd6497a625b22217f1e71587be79b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wescoff, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:07:30 compute-0 systemd[1]: libpod-conmon-fca14b0e67142746d5139160283ca3be89dd6497a625b22217f1e71587be79b5.scope: Deactivated successfully.
Dec 09 16:07:30 compute-0 podman[102003]: 2025-12-09 16:07:30.875960355 +0000 UTC m=+0.053277958 container create a1ae42e35c04f2a9bda4229f5da3c2354eca00f829e89e3f921d98781316892d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:07:30 compute-0 systemd[1]: Started libpod-conmon-a1ae42e35c04f2a9bda4229f5da3c2354eca00f829e89e3f921d98781316892d.scope.
Dec 09 16:07:30 compute-0 podman[102003]: 2025-12-09 16:07:30.857109006 +0000 UTC m=+0.034426579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:07:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29fb8f0c0343f34cf4f15abac5f25cd648c0dfa82ae8cdce6f84a873931b74c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29fb8f0c0343f34cf4f15abac5f25cd648c0dfa82ae8cdce6f84a873931b74c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29fb8f0c0343f34cf4f15abac5f25cd648c0dfa82ae8cdce6f84a873931b74c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29fb8f0c0343f34cf4f15abac5f25cd648c0dfa82ae8cdce6f84a873931b74c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:07:30 compute-0 podman[102003]: 2025-12-09 16:07:30.975118814 +0000 UTC m=+0.152436417 container init a1ae42e35c04f2a9bda4229f5da3c2354eca00f829e89e3f921d98781316892d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 09 16:07:30 compute-0 podman[102003]: 2025-12-09 16:07:30.989277344 +0000 UTC m=+0.166594937 container start a1ae42e35c04f2a9bda4229f5da3c2354eca00f829e89e3f921d98781316892d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:07:30 compute-0 podman[102003]: 2025-12-09 16:07:30.994444796 +0000 UTC m=+0.171762369 container attach a1ae42e35c04f2a9bda4229f5da3c2354eca00f829e89e3f921d98781316892d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:07:31 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 09 16:07:31 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 09 16:07:31 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 09 16:07:31 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 09 16:07:31 compute-0 ceph-mon[75222]: pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 239 B/s wr, 5 op/s; 38 B/s, 1 objects/s recovering
Dec 09 16:07:31 compute-0 ceph-mon[75222]: 11.11 scrub starts
Dec 09 16:07:31 compute-0 ceph-mon[75222]: 11.11 scrub ok
Dec 09 16:07:31 compute-0 ceph-mon[75222]: 10.f scrub starts
Dec 09 16:07:31 compute-0 ceph-mon[75222]: 10.f scrub ok
Dec 09 16:07:31 compute-0 lvm[102100]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:07:31 compute-0 lvm[102104]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:07:31 compute-0 lvm[102100]: VG ceph_vg0 finished
Dec 09 16:07:31 compute-0 lvm[102104]: VG ceph_vg1 finished
Dec 09 16:07:31 compute-0 lvm[102109]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:07:31 compute-0 lvm[102109]: VG ceph_vg2 finished
Dec 09 16:07:31 compute-0 relaxed_einstein[102020]: {}
Dec 09 16:07:31 compute-0 systemd[1]: libpod-a1ae42e35c04f2a9bda4229f5da3c2354eca00f829e89e3f921d98781316892d.scope: Deactivated successfully.
Dec 09 16:07:31 compute-0 podman[102003]: 2025-12-09 16:07:31.86934154 +0000 UTC m=+1.046659123 container died a1ae42e35c04f2a9bda4229f5da3c2354eca00f829e89e3f921d98781316892d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:07:31 compute-0 systemd[1]: libpod-a1ae42e35c04f2a9bda4229f5da3c2354eca00f829e89e3f921d98781316892d.scope: Consumed 1.405s CPU time.
Dec 09 16:07:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a29fb8f0c0343f34cf4f15abac5f25cd648c0dfa82ae8cdce6f84a873931b74c-merged.mount: Deactivated successfully.
Dec 09 16:07:31 compute-0 podman[102003]: 2025-12-09 16:07:31.914457062 +0000 UTC m=+1.091774625 container remove a1ae42e35c04f2a9bda4229f5da3c2354eca00f829e89e3f921d98781316892d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 16:07:31 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 204 B/s wr, 4 op/s; 32 B/s, 1 objects/s recovering
Dec 09 16:07:31 compute-0 systemd[1]: libpod-conmon-a1ae42e35c04f2a9bda4229f5da3c2354eca00f829e89e3f921d98781316892d.scope: Deactivated successfully.
Dec 09 16:07:31 compute-0 sudo[101924]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:07:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:07:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:07:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
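
With `raw list` returning {} at 16:07:31 (no raw-mode OSDs; all three OSDs are LVM-based), the mgr persists the refreshed device inventory for this host in the config-key store, which is what the two handle_command/audit pairs above record. A sketch for reading that cache back, assuming the stored value is JSON as cephadm writes it (the key name is copied from the audit line; `ceph config-key get` is the standard retrieval command):

    import json, subprocess

    # Read back cephadm's cached device inventory for compute-0.
    key = "mgr/cephadm/host.compute-0.devices.0"
    raw = subprocess.check_output(["ceph", "config-key", "get", key])
    print(json.dumps(json.loads(raw), indent=2)[:400])  # first part of the cache
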
Dec 09 16:07:32 compute-0 sudo[102125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:07:32 compute-0 sudo[102125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:07:32 compute-0 sudo[102125]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:32 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 09 16:07:32 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 09 16:07:32 compute-0 ceph-mon[75222]: 7.1 scrub starts
Dec 09 16:07:32 compute-0 ceph-mon[75222]: 7.1 scrub ok
Dec 09 16:07:32 compute-0 ceph-mon[75222]: pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 204 B/s wr, 4 op/s; 32 B/s, 1 objects/s recovering
Dec 09 16:07:32 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:07:32 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:07:33 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 09 16:07:33 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 09 16:07:33 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 09 16:07:33 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 09 16:07:33 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 88 B/s wr, 3 op/s; 28 B/s, 1 objects/s recovering
Dec 09 16:07:33 compute-0 ceph-mon[75222]: 11.d scrub starts
Dec 09 16:07:33 compute-0 ceph-mon[75222]: 11.d scrub ok
Dec 09 16:07:33 compute-0 ceph-mon[75222]: 2.4 scrub starts
Dec 09 16:07:33 compute-0 ceph-mon[75222]: 2.4 scrub ok
Dec 09 16:07:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:07:35 compute-0 ceph-mon[75222]: 7.6 scrub starts
Dec 09 16:07:35 compute-0 ceph-mon[75222]: 7.6 scrub ok
Dec 09 16:07:35 compute-0 ceph-mon[75222]: pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 88 B/s wr, 3 op/s; 28 B/s, 1 objects/s recovering
Dec 09 16:07:35 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 85 B/s wr, 3 op/s; 27 B/s, 1 objects/s recovering
Dec 09 16:07:36 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 09 16:07:36 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 09 16:07:36 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 09 16:07:36 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:07:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
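
The pg_autoscaler figures above are internally consistent with raw_target = usage_ratio x bias x (OSD count x target PGs per OSD), with 3 OSDs and the default mon_target_pg_per_osd of 100; the 3 x 100 budget is inferred from the numbers, not stated in the log. The raw target is then quantized to a power of two, and a pool is left at its current pg_num when the change is below the scaler's threshold, which is why every pool here stays at its current value. A worked check against two of the lines above:

    # Reproduce the raw 'pg target' figures logged by pg_autoscaler (inferred formula).
    def raw_pg_target(usage_ratio, bias, osds=3, target_pg_per_osd=100):
        return usage_ratio * bias * osds * target_pg_per_osd

    # '.mgr': using 7.185749983720779e-06 of space, bias 1.0 -> 0.0021557249951162337
    assert abs(raw_pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12
    # 'cephfs.cephfs.meta': using 2.0333656678172135e-06, bias 4.0 -> 0.002440038801380656
    assert abs(raw_pg_target(2.0333656678172135e-06, 4.0) - 0.002440038801380656) < 1e-12
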
Dec 09 16:07:37 compute-0 ceph-mon[75222]: pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 85 B/s wr, 3 op/s; 27 B/s, 1 objects/s recovering
Dec 09 16:07:37 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 09 16:07:37 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 09 16:07:37 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:38 compute-0 ceph-mon[75222]: 7.c scrub starts
Dec 09 16:07:38 compute-0 ceph-mon[75222]: 7.c scrub ok
Dec 09 16:07:38 compute-0 ceph-mon[75222]: 2.2 scrub starts
Dec 09 16:07:38 compute-0 ceph-mon[75222]: 2.2 scrub ok
Dec 09 16:07:39 compute-0 ceph-mon[75222]: 3.8 scrub starts
Dec 09 16:07:39 compute-0 ceph-mon[75222]: 3.8 scrub ok
Dec 09 16:07:39 compute-0 ceph-mon[75222]: pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:07:39 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:40 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Dec 09 16:07:40 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Dec 09 16:07:41 compute-0 ceph-mon[75222]: pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:41 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 09 16:07:41 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 09 16:07:41 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:42 compute-0 ceph-mon[75222]: 8.9 scrub starts
Dec 09 16:07:42 compute-0 ceph-mon[75222]: 8.9 scrub ok
Dec 09 16:07:42 compute-0 ceph-mon[75222]: 4.2 scrub starts
Dec 09 16:07:42 compute-0 ceph-mon[75222]: 4.2 scrub ok
Dec 09 16:07:42 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 09 16:07:42 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 09 16:07:42 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 09 16:07:42 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 09 16:07:43 compute-0 ceph-mon[75222]: pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:43 compute-0 ceph-mon[75222]: 4.4 scrub starts
Dec 09 16:07:43 compute-0 ceph-mon[75222]: 4.4 scrub ok
Dec 09 16:07:43 compute-0 ceph-mon[75222]: 3.7 scrub starts
Dec 09 16:07:43 compute-0 ceph-mon[75222]: 3.7 scrub ok
Dec 09 16:07:43 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec 09 16:07:43 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec 09 16:07:43 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:44 compute-0 ceph-mon[75222]: 11.3 scrub starts
Dec 09 16:07:44 compute-0 ceph-mon[75222]: 11.3 scrub ok
Dec 09 16:07:44 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 09 16:07:44 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 09 16:07:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:07:44 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 09 16:07:44 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 09 16:07:45 compute-0 ceph-mon[75222]: pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:45 compute-0 ceph-mon[75222]: 8.11 scrub starts
Dec 09 16:07:45 compute-0 ceph-mon[75222]: 8.11 scrub ok
Dec 09 16:07:45 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 09 16:07:45 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 09 16:07:45 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:46 compute-0 ceph-mon[75222]: 5.5 scrub starts
Dec 09 16:07:46 compute-0 ceph-mon[75222]: 5.5 scrub ok
Dec 09 16:07:46 compute-0 ceph-mon[75222]: 4.f scrub starts
Dec 09 16:07:46 compute-0 ceph-mon[75222]: 4.f scrub ok
Dec 09 16:07:46 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 09 16:07:46 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 09 16:07:47 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 09 16:07:47 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 09 16:07:47 compute-0 ceph-mon[75222]: pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:47 compute-0 ceph-mon[75222]: 11.12 scrub starts
Dec 09 16:07:47 compute-0 ceph-mon[75222]: 11.12 scrub ok
Dec 09 16:07:47 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:48 compute-0 ceph-mon[75222]: 5.c scrub starts
Dec 09 16:07:48 compute-0 ceph-mon[75222]: 5.c scrub ok
Dec 09 16:07:49 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 09 16:07:49 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 09 16:07:49 compute-0 ceph-mon[75222]: pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:07:49 compute-0 sudo[101395]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:49 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:50 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 09 16:07:50 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 09 16:07:50 compute-0 sudo[102312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsqbextkqbguoxexdxuitcuhgjcewphs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296469.7242906-137-110607553275176/AnsiballZ_command.py'
Dec 09 16:07:50 compute-0 sudo[102312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:50 compute-0 ceph-mon[75222]: 4.d scrub starts
Dec 09 16:07:50 compute-0 ceph-mon[75222]: 4.d scrub ok
Dec 09 16:07:50 compute-0 python3.9[102314]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:07:50 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 09 16:07:50 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 09 16:07:51 compute-0 sudo[102312]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:51 compute-0 ceph-mon[75222]: pgmap v253: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:51 compute-0 ceph-mon[75222]: 10.2 scrub starts
Dec 09 16:07:51 compute-0 ceph-mon[75222]: 10.2 scrub ok
Dec 09 16:07:51 compute-0 ceph-mon[75222]: 3.3 scrub starts
Dec 09 16:07:51 compute-0 ceph-mon[75222]: 3.3 scrub ok
Dec 09 16:07:51 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:51 compute-0 sudo[102599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuxjhonqxfrirsytwxjnqusgptvdvqya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296471.3233705-145-104786403105407/AnsiballZ_selinux.py'
Dec 09 16:07:51 compute-0 sudo[102599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:52 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 09 16:07:52 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 09 16:07:52 compute-0 python3.9[102601]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 09 16:07:52 compute-0 sudo[102599]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:52 compute-0 sudo[102751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqfvusttufbvsmqdmxusgfvioifdgmfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296472.622678-156-165461959947637/AnsiballZ_command.py'
Dec 09 16:07:52 compute-0 sudo[102751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:52 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 09 16:07:53 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 09 16:07:53 compute-0 python3.9[102753]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 09 16:07:53 compute-0 sudo[102751]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:53 compute-0 ceph-mon[75222]: pgmap v254: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:53 compute-0 ceph-mon[75222]: 11.15 scrub starts
Dec 09 16:07:53 compute-0 ceph-mon[75222]: 11.15 scrub ok
Dec 09 16:07:53 compute-0 sudo[102903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saunbrnaqekjpdbhfrtfgijftwihhirx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296473.283272-164-95448814610400/AnsiballZ_file.py'
Dec 09 16:07:53 compute-0 sudo[102903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:53 compute-0 python3.9[102905]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:07:53 compute-0 sudo[102903]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:53 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:54 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 09 16:07:54 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 09 16:07:54 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 09 16:07:54 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 09 16:07:54 compute-0 ceph-mon[75222]: 10.b scrub starts
Dec 09 16:07:54 compute-0 ceph-mon[75222]: 10.b scrub ok
Dec 09 16:07:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:07:54 compute-0 sshd-session[102938]: Invalid user dspace from 146.190.31.45 port 59458
Dec 09 16:07:54 compute-0 sshd-session[102938]: Connection closed by invalid user dspace 146.190.31.45 port 59458 [preauth]
Dec 09 16:07:54 compute-0 sudo[103057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slkwoearaxsoyoezesvrndiegzvhaljm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296474.0049162-172-137494252683904/AnsiballZ_mount.py'
Dec 09 16:07:54 compute-0 sudo[103057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:54 compute-0 python3.9[103059]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 09 16:07:54 compute-0 sudo[103057]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:55 compute-0 ceph-mon[75222]: pgmap v255: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:55 compute-0 ceph-mon[75222]: 4.5 scrub starts
Dec 09 16:07:55 compute-0 ceph-mon[75222]: 4.5 scrub ok
Dec 09 16:07:55 compute-0 ceph-mon[75222]: 8.15 scrub starts
Dec 09 16:07:55 compute-0 ceph-mon[75222]: 8.15 scrub ok
Dec 09 16:07:55 compute-0 sudo[103209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjlremgqwgedivdurfskgrneevtxrezt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296475.480772-200-227573814043691/AnsiballZ_file.py'
Dec 09 16:07:55 compute-0 sudo[103209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:55 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:55 compute-0 python3.9[103211]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:07:56 compute-0 sudo[103209]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:56 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 09 16:07:56 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 09 16:07:56 compute-0 sudo[103361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfsdpnsnplhfvqallfqlkibfdfttoyyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296476.1871846-208-278394635560696/AnsiballZ_stat.py'
Dec 09 16:07:56 compute-0 sudo[103361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:07:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:07:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:07:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:07:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:07:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:07:56 compute-0 python3.9[103363]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:07:56 compute-0 sudo[103361]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:56 compute-0 sudo[103439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifkdhybtmnhxttfoewkmaxflqtwvarfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296476.1871846-208-278394635560696/AnsiballZ_file.py'
Dec 09 16:07:56 compute-0 sudo[103439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:57 compute-0 python3.9[103441]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:07:57 compute-0 sudo[103439]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:57 compute-0 ceph-mon[75222]: pgmap v256: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:57 compute-0 ceph-mon[75222]: 3.1e scrub starts
Dec 09 16:07:57 compute-0 ceph-mon[75222]: 3.1e scrub ok
Dec 09 16:07:57 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec 09 16:07:57 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec 09 16:07:57 compute-0 sudo[103591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrwhxpqrctjbsdhhtftopbxwiifycwtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296477.5874598-229-22041792713335/AnsiballZ_stat.py'
Dec 09 16:07:57 compute-0 sudo[103591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:57 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:57 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 09 16:07:57 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 09 16:07:58 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 09 16:07:58 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 09 16:07:58 compute-0 python3.9[103593]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:07:58 compute-0 sudo[103591]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:58 compute-0 ceph-mon[75222]: 7.1a scrub starts
Dec 09 16:07:58 compute-0 ceph-mon[75222]: 7.1a scrub ok
Dec 09 16:07:58 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 09 16:07:58 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 09 16:07:59 compute-0 sudo[103745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knxejigilpokydvyycxkfoipsgdruifj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296478.552192-242-213281231706986/AnsiballZ_getent.py'
Dec 09 16:07:59 compute-0 sudo[103745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:59 compute-0 python3.9[103747]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 09 16:07:59 compute-0 sudo[103745]: pam_unix(sudo:session): session closed for user root
Dec 09 16:07:59 compute-0 ceph-mon[75222]: 11.6 scrub starts
Dec 09 16:07:59 compute-0 ceph-mon[75222]: 11.6 scrub ok
Dec 09 16:07:59 compute-0 ceph-mon[75222]: pgmap v257: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:59 compute-0 ceph-mon[75222]: 5.f scrub starts
Dec 09 16:07:59 compute-0 ceph-mon[75222]: 5.f scrub ok
Dec 09 16:07:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:07:59 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.c scrub starts
Dec 09 16:07:59 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.c scrub ok
Dec 09 16:07:59 compute-0 sudo[103898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vafpulvarxqmmahdypgajmhfspyqvwac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296479.4834661-252-183127542177582/AnsiballZ_getent.py'
Dec 09 16:07:59 compute-0 sudo[103898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:07:59 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:07:59 compute-0 python3.9[103900]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 09 16:07:59 compute-0 sudo[103898]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:00 compute-0 ceph-mon[75222]: 3.6 scrub starts
Dec 09 16:08:00 compute-0 ceph-mon[75222]: 3.6 scrub ok
Dec 09 16:08:00 compute-0 sudo[104051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tztdcmqtbupbsmoehhkiomiicpwqccdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296480.1514611-260-37242039988119/AnsiballZ_group.py'
Dec 09 16:08:00 compute-0 sudo[104051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:00 compute-0 python3.9[104053]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 16:08:00 compute-0 sudo[104051]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:01 compute-0 sudo[104203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oobgwvcwyusgflpigzohxqsdzmlgaopn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296481.034367-269-120625036563551/AnsiballZ_file.py'
Dec 09 16:08:01 compute-0 sudo[104203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:01 compute-0 ceph-mon[75222]: 8.c scrub starts
Dec 09 16:08:01 compute-0 ceph-mon[75222]: 8.c scrub ok
Dec 09 16:08:01 compute-0 ceph-mon[75222]: pgmap v258: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:01 compute-0 python3.9[104205]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 09 16:08:01 compute-0 sudo[104203]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:01 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:02 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Dec 09 16:08:02 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Dec 09 16:08:02 compute-0 sudo[104355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxjomsdutwncrxiilwjfgvwgugnbbbgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296481.8169794-280-93667092021545/AnsiballZ_dnf.py'
Dec 09 16:08:02 compute-0 sudo[104355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:02 compute-0 python3.9[104357]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:08:02 compute-0 ceph-mon[75222]: 11.8 scrub starts
Dec 09 16:08:02 compute-0 ceph-mon[75222]: 11.8 scrub ok
Dec 09 16:08:03 compute-0 ceph-mon[75222]: pgmap v259: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:03 compute-0 sudo[104355]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:03 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:04 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 09 16:08:04 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 09 16:08:04 compute-0 sudo[104508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vifjyirzzuaobhhepylwryrdqkzkkcaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296483.8862655-288-55029304550757/AnsiballZ_file.py'
Dec 09 16:08:04 compute-0 sudo[104508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:08:04 compute-0 ceph-mon[75222]: 4.18 scrub starts
Dec 09 16:08:04 compute-0 python3.9[104510]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:08:04 compute-0 ceph-mon[75222]: 4.18 scrub ok
Dec 09 16:08:04 compute-0 sudo[104508]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:04 compute-0 sudo[104660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hadqaaxyonlzolfwsdxmgisndgozccwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296484.6447792-296-176747261391627/AnsiballZ_stat.py'
Dec 09 16:08:04 compute-0 sudo[104660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:05 compute-0 python3.9[104662]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:08:05 compute-0 sudo[104660]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:05 compute-0 ceph-mon[75222]: pgmap v260: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:05 compute-0 sudo[104739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wktwypophiotjqambzxojnzeppwwriql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296484.6447792-296-176747261391627/AnsiballZ_file.py'
Dec 09 16:08:05 compute-0 sudo[104739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:05 compute-0 python3.9[104741]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:08:05 compute-0 sudo[104739]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:05 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:06 compute-0 sudo[104891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gygqdqvnmkfybhbgfrmgaildgleuhryn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296485.86014-309-198596434076172/AnsiballZ_stat.py'
Dec 09 16:08:06 compute-0 sudo[104891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:06 compute-0 python3.9[104893]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:08:06 compute-0 sudo[104891]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:06 compute-0 sudo[104970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcvuyajmzsqwmkatgtguxdkqjuqebtlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296485.86014-309-198596434076172/AnsiballZ_file.py'
Dec 09 16:08:06 compute-0 sudo[104970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:06 compute-0 python3.9[104972]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:08:06 compute-0 sudo[104970]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:06 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 09 16:08:06 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 09 16:08:07 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 09 16:08:07 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 09 16:08:07 compute-0 ceph-mon[75222]: pgmap v261: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:07 compute-0 ceph-mon[75222]: 4.1b scrub starts
Dec 09 16:08:07 compute-0 ceph-mon[75222]: 4.1b scrub ok
Dec 09 16:08:07 compute-0 sudo[105122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voaxgzcgvhuknwrizixjvxlwjgamwsjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296487.2722118-324-112257800297666/AnsiballZ_dnf.py'
Dec 09 16:08:07 compute-0 sudo[105122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:07 compute-0 python3.9[105124]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:08:07 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:07 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec 09 16:08:07 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec 09 16:08:07 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 09 16:08:07 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 09 16:08:08 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 09 16:08:08 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 09 16:08:08 compute-0 ceph-mon[75222]: 4.9 scrub starts
Dec 09 16:08:08 compute-0 ceph-mon[75222]: 4.9 scrub ok
Dec 09 16:08:08 compute-0 ceph-mon[75222]: 8.2 scrub starts
Dec 09 16:08:08 compute-0 ceph-mon[75222]: 8.2 scrub ok
Dec 09 16:08:08 compute-0 ceph-mon[75222]: 10.17 scrub starts
Dec 09 16:08:08 compute-0 ceph-mon[75222]: 10.17 scrub ok
Dec 09 16:08:08 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 09 16:08:08 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 09 16:08:09 compute-0 sudo[105122]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:08:09 compute-0 ceph-mon[75222]: pgmap v262: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:09 compute-0 ceph-mon[75222]: 2.d scrub starts
Dec 09 16:08:09 compute-0 ceph-mon[75222]: 2.d scrub ok
Dec 09 16:08:09 compute-0 ceph-mon[75222]: 7.8 scrub starts
Dec 09 16:08:09 compute-0 ceph-mon[75222]: 7.8 scrub ok
Dec 09 16:08:09 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:09 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 09 16:08:09 compute-0 python3.9[105275]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:08:09 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 09 16:08:10 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 09 16:08:10 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 09 16:08:10 compute-0 ceph-mon[75222]: 11.f scrub starts
Dec 09 16:08:10 compute-0 ceph-mon[75222]: 11.f scrub ok
Dec 09 16:08:10 compute-0 python3.9[105427]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 09 16:08:11 compute-0 ceph-mon[75222]: pgmap v263: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:11 compute-0 ceph-mon[75222]: 10.6 scrub starts
Dec 09 16:08:11 compute-0 ceph-mon[75222]: 10.6 scrub ok
Dec 09 16:08:11 compute-0 python3.9[105577]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:08:11 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:12 compute-0 sudo[105727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcdvmnsjjfzufotgvkbkvddvaopeevmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296491.9279368-365-193872668495462/AnsiballZ_systemd.py'
Dec 09 16:08:12 compute-0 sudo[105727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:12 compute-0 python3.9[105729]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:08:12 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 09 16:08:12 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 09 16:08:12 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 09 16:08:13 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 09 16:08:13 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 09 16:08:13 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 09 16:08:13 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 09 16:08:13 compute-0 sudo[105727]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:13 compute-0 ceph-mon[75222]: pgmap v264: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:13 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 09 16:08:13 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 09 16:08:13 compute-0 python3.9[105892]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 09 16:08:13 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:08:14 compute-0 ceph-mon[75222]: 5.9 scrub starts
Dec 09 16:08:14 compute-0 ceph-mon[75222]: 5.9 scrub ok
Dec 09 16:08:14 compute-0 ceph-mon[75222]: 4.1a scrub starts
Dec 09 16:08:14 compute-0 ceph-mon[75222]: 4.1a scrub ok
Dec 09 16:08:14 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 09 16:08:14 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 09 16:08:15 compute-0 ceph-mon[75222]: pgmap v265: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:15 compute-0 ceph-mon[75222]: 5.16 scrub starts
Dec 09 16:08:15 compute-0 ceph-mon[75222]: 5.16 scrub ok
Dec 09 16:08:15 compute-0 sudo[106042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flnqqylydwzybcrwvrfwagwypgkjthux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296495.473523-422-133393101063923/AnsiballZ_systemd.py'
Dec 09 16:08:15 compute-0 sudo[106042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:15 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:16 compute-0 python3.9[106044]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:08:16 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.e scrub starts
Dec 09 16:08:16 compute-0 sudo[106042]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:16 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.e scrub ok
Dec 09 16:08:16 compute-0 sudo[106196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fifqzhjqvhikxocjuzabgchoeektqdll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296496.3335826-422-88211767024565/AnsiballZ_systemd.py'
Dec 09 16:08:16 compute-0 sudo[106196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:16 compute-0 ceph-mon[75222]: pgmap v266: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:16 compute-0 ceph-mon[75222]: 11.e scrub starts
Dec 09 16:08:16 compute-0 ceph-mon[75222]: 11.e scrub ok
Dec 09 16:08:16 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 09 16:08:16 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 09 16:08:16 compute-0 python3.9[106198]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:08:17 compute-0 sudo[106196]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:17 compute-0 sshd-session[99483]: Connection closed by 192.168.122.30 port 44366
Dec 09 16:08:17 compute-0 sshd-session[99480]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:08:17 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Dec 09 16:08:17 compute-0 systemd[1]: session-35.scope: Consumed 1min 6.429s CPU time.
Dec 09 16:08:17 compute-0 systemd-logind[786]: Session 35 logged out. Waiting for processes to exit.
Dec 09 16:08:17 compute-0 systemd-logind[786]: Removed session 35.
Dec 09 16:08:17 compute-0 ceph-mon[75222]: 7.a scrub starts
Dec 09 16:08:17 compute-0 ceph-mon[75222]: 7.a scrub ok
Dec 09 16:08:17 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:18 compute-0 ceph-mon[75222]: pgmap v267: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:18 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 09 16:08:18 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 09 16:08:19 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 09 16:08:19 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 09 16:08:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:08:19 compute-0 ceph-mon[75222]: 3.e scrub starts
Dec 09 16:08:19 compute-0 ceph-mon[75222]: 3.e scrub ok
Dec 09 16:08:19 compute-0 ceph-mon[75222]: 5.3 scrub starts
Dec 09 16:08:19 compute-0 ceph-mon[75222]: 5.3 scrub ok
Dec 09 16:08:19 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:20 compute-0 ceph-mon[75222]: pgmap v268: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:20 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 09 16:08:20 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 09 16:08:20 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 09 16:08:20 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 09 16:08:21 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec 09 16:08:21 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec 09 16:08:21 compute-0 ceph-mon[75222]: 4.14 scrub starts
Dec 09 16:08:21 compute-0 ceph-mon[75222]: 4.14 scrub ok
Dec 09 16:08:21 compute-0 ceph-mon[75222]: 11.18 scrub starts
Dec 09 16:08:21 compute-0 ceph-mon[75222]: 11.18 scrub ok
Dec 09 16:08:21 compute-0 ceph-mon[75222]: 8.e scrub starts
Dec 09 16:08:21 compute-0 ceph-mon[75222]: 8.e scrub ok
Dec 09 16:08:21 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 09 16:08:21 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 09 16:08:21 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:22 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 09 16:08:22 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 09 16:08:22 compute-0 ceph-mon[75222]: 7.2 scrub starts
Dec 09 16:08:22 compute-0 ceph-mon[75222]: 7.2 scrub ok
Dec 09 16:08:22 compute-0 ceph-mon[75222]: pgmap v269: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:22 compute-0 ceph-mon[75222]: 7.3 scrub starts
Dec 09 16:08:22 compute-0 ceph-mon[75222]: 7.3 scrub ok
Dec 09 16:08:22 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 09 16:08:22 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 09 16:08:22 compute-0 sshd-session[106225]: Accepted publickey for zuul from 192.168.122.30 port 41146 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:08:22 compute-0 systemd-logind[786]: New session 36 of user zuul.
Dec 09 16:08:22 compute-0 systemd[1]: Started Session 36 of User zuul.
Dec 09 16:08:23 compute-0 sshd-session[106225]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:08:23 compute-0 ceph-mon[75222]: 8.1b scrub starts
Dec 09 16:08:23 compute-0 ceph-mon[75222]: 8.1b scrub ok
Dec 09 16:08:23 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 09 16:08:23 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 09 16:08:23 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:24 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 09 16:08:24 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 09 16:08:24 compute-0 python3.9[106378]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:08:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:08:24 compute-0 ceph-mon[75222]: 10.19 scrub starts
Dec 09 16:08:24 compute-0 ceph-mon[75222]: 10.19 scrub ok
Dec 09 16:08:24 compute-0 ceph-mon[75222]: pgmap v270: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:24 compute-0 ceph-mon[75222]: 5.2 scrub starts
Dec 09 16:08:24 compute-0 ceph-mon[75222]: 5.2 scrub ok
Dec 09 16:08:24 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 09 16:08:24 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 09 16:08:25 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 09 16:08:25 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 09 16:08:25 compute-0 sudo[106532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cronjmanltyroevwzodhlhtvkajczphf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296504.634092-36-99704234275731/AnsiballZ_getent.py'
Dec 09 16:08:25 compute-0 sudo[106532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:25 compute-0 python3.9[106534]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 09 16:08:25 compute-0 sudo[106532]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:25 compute-0 ceph-mon[75222]: 2.15 scrub starts
Dec 09 16:08:25 compute-0 ceph-mon[75222]: 2.15 scrub ok
Dec 09 16:08:25 compute-0 ceph-mon[75222]: 7.f scrub starts
Dec 09 16:08:25 compute-0 ceph-mon[75222]: 7.f scrub ok
Dec 09 16:08:25 compute-0 sudo[106685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytaxzdcfeynccaiurhjnwbgcpmznbmib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296505.6100261-48-250568456526384/AnsiballZ_setup.py'
Dec 09 16:08:25 compute-0 sudo[106685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:08:25
Dec 09 16:08:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:08:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:08:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'vms', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta']
Dec 09 16:08:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:08:25 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:26 compute-0 python3.9[106687]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:08:26 compute-0 sudo[106685]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:08:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:08:26 compute-0 ceph-mon[75222]: pgmap v271: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:26 compute-0 sudo[106769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcbqmgwzlkgojadttpviybvscbrnhqyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296505.6100261-48-250568456526384/AnsiballZ_dnf.py'
Dec 09 16:08:26 compute-0 sudo[106769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:27 compute-0 python3.9[106771]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 16:08:27 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:28 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 09 16:08:28 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 09 16:08:28 compute-0 sudo[106769]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:28 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 09 16:08:28 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 09 16:08:28 compute-0 sudo[106922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcrdtqkgxfhggbazvcwymrjeeuglxxlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296508.665138-62-163116351772128/AnsiballZ_dnf.py'
Dec 09 16:08:28 compute-0 sudo[106922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:29 compute-0 ceph-mon[75222]: pgmap v272: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:29 compute-0 ceph-mon[75222]: 2.b scrub starts
Dec 09 16:08:29 compute-0 ceph-mon[75222]: 2.b scrub ok
Dec 09 16:08:29 compute-0 python3.9[106924]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:08:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:08:29 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 09 16:08:29 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 09 16:08:29 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:30 compute-0 ceph-mon[75222]: 5.12 scrub starts
Dec 09 16:08:30 compute-0 ceph-mon[75222]: 5.12 scrub ok
Dec 09 16:08:30 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 09 16:08:30 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 09 16:08:30 compute-0 sudo[106922]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:31 compute-0 ceph-mon[75222]: 8.4 scrub starts
Dec 09 16:08:31 compute-0 ceph-mon[75222]: 8.4 scrub ok
Dec 09 16:08:31 compute-0 ceph-mon[75222]: pgmap v273: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:31 compute-0 ceph-mon[75222]: 3.a scrub starts
Dec 09 16:08:31 compute-0 ceph-mon[75222]: 3.a scrub ok
Dec 09 16:08:31 compute-0 sudo[107075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaxopevhnyelmtnwpavcswnfyqodozjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296510.6900501-70-7110002675738/AnsiballZ_systemd.py'
Dec 09 16:08:31 compute-0 sudo[107075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:31 compute-0 python3.9[107077]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 16:08:31 compute-0 sudo[107075]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:31 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:32 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec 09 16:08:32 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec 09 16:08:32 compute-0 sudo[107126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:08:32 compute-0 sudo[107126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:08:32 compute-0 sudo[107126]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:32 compute-0 sudo[107176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:08:32 compute-0 sudo[107176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:08:32 compute-0 python3.9[107282]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:08:32 compute-0 sudo[107176]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:08:32 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:08:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:08:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:08:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:08:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:08:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:08:32 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:08:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:08:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:08:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:08:32 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:08:32 compute-0 sudo[107330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:08:32 compute-0 sudo[107330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:08:32 compute-0 sudo[107330]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:32 compute-0 sudo[107361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:08:32 compute-0 sudo[107361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:08:33 compute-0 ceph-mon[75222]: pgmap v274: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:33 compute-0 ceph-mon[75222]: 3.9 scrub starts
Dec 09 16:08:33 compute-0 ceph-mon[75222]: 3.9 scrub ok
Dec 09 16:08:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:08:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:08:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:08:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:08:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:08:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:08:33 compute-0 podman[107450]: 2025-12-09 16:08:33.277354609 +0000 UTC m=+0.052831423 container create a987aa5781b2de35441503e3ce33446627e4a7f98b0528b931e550fb485545f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:08:33 compute-0 systemd[76651]: Created slice User Background Tasks Slice.
Dec 09 16:08:33 compute-0 systemd[76651]: Starting Cleanup of User's Temporary Files and Directories...
Dec 09 16:08:33 compute-0 systemd[1]: Started libpod-conmon-a987aa5781b2de35441503e3ce33446627e4a7f98b0528b931e550fb485545f5.scope.
Dec 09 16:08:33 compute-0 systemd[76651]: Finished Cleanup of User's Temporary Files and Directories.
Dec 09 16:08:33 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:08:33 compute-0 podman[107450]: 2025-12-09 16:08:33.256205211 +0000 UTC m=+0.031682055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:08:33 compute-0 podman[107450]: 2025-12-09 16:08:33.358687111 +0000 UTC m=+0.134163945 container init a987aa5781b2de35441503e3ce33446627e4a7f98b0528b931e550fb485545f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:08:33 compute-0 podman[107450]: 2025-12-09 16:08:33.367845209 +0000 UTC m=+0.143322013 container start a987aa5781b2de35441503e3ce33446627e4a7f98b0528b931e550fb485545f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:08:33 compute-0 podman[107450]: 2025-12-09 16:08:33.37096144 +0000 UTC m=+0.146438254 container attach a987aa5781b2de35441503e3ce33446627e4a7f98b0528b931e550fb485545f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:08:33 compute-0 fervent_goldstine[107484]: 167 167
Dec 09 16:08:33 compute-0 systemd[1]: libpod-a987aa5781b2de35441503e3ce33446627e4a7f98b0528b931e550fb485545f5.scope: Deactivated successfully.
Dec 09 16:08:33 compute-0 podman[107450]: 2025-12-09 16:08:33.375904794 +0000 UTC m=+0.151381598 container died a987aa5781b2de35441503e3ce33446627e4a7f98b0528b931e550fb485545f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:08:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bf4b7f80dafed0056cf09d6139ae022aa6e835612f3b59badb35750edb7b7dd-merged.mount: Deactivated successfully.
Dec 09 16:08:33 compute-0 podman[107450]: 2025-12-09 16:08:33.414119779 +0000 UTC m=+0.189596593 container remove a987aa5781b2de35441503e3ce33446627e4a7f98b0528b931e550fb485545f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 09 16:08:33 compute-0 systemd[1]: libpod-conmon-a987aa5781b2de35441503e3ce33446627e4a7f98b0528b931e550fb485545f5.scope: Deactivated successfully.
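Each of these short-lived helpers leaves the same journal signature: an image pull, container create/init/start/attach, the container's stdout under its generated name, then died/remove with the matching libpod and libpod-conmon scopes deactivating. The "167 167" printed by fervent_goldstine is the uid and gid of the ceph user inside the image. A minimal sketch of the same one-shot pattern, assuming podman is installed and that /var/lib/ceph inside this image is owned by the ceph user (which matches the logged output):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # --rm reproduces the create/init/start/attach/died/remove sequence seen
    # in the journal; stdout is the uid/gid of /var/lib/ceph inside the image.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected: "167 167", matching the log above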
Dec 09 16:08:33 compute-0 sudo[107558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppqhuktyxkbimuzfscdwmmjsqkjvdtmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296512.9954307-88-105155834883731/AnsiballZ_sefcontext.py'
Dec 09 16:08:33 compute-0 sudo[107558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:33 compute-0 podman[107566]: 2025-12-09 16:08:33.577542186 +0000 UTC m=+0.045379234 container create 0ff994437a5e1e2f34a125235b6011ffe63dfb408dc3e9ddc7d60c9f4b7ff43f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_grothendieck, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:08:33 compute-0 systemd[1]: Started libpod-conmon-0ff994437a5e1e2f34a125235b6011ffe63dfb408dc3e9ddc7d60c9f4b7ff43f.scope.
Dec 09 16:08:33 compute-0 podman[107566]: 2025-12-09 16:08:33.555164714 +0000 UTC m=+0.023001792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:08:33 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0fcd113ae40c903df0fd48f1ba04cc85abb9458dbc4fba6e0a5aafdb4a75a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0fcd113ae40c903df0fd48f1ba04cc85abb9458dbc4fba6e0a5aafdb4a75a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0fcd113ae40c903df0fd48f1ba04cc85abb9458dbc4fba6e0a5aafdb4a75a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0fcd113ae40c903df0fd48f1ba04cc85abb9458dbc4fba6e0a5aafdb4a75a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0fcd113ae40c903df0fd48f1ba04cc85abb9458dbc4fba6e0a5aafdb4a75a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
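The xfs messages are informational: without the bigtime feature, xfs inode timestamps are 32-bit, and 0x7fffffff is the largest signed 32-bit time_t. A quick conversion shows the cutoff the kernel is warning about:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # largest signed 32-bit time_t, as printed by the kernel
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00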
Dec 09 16:08:33 compute-0 podman[107566]: 2025-12-09 16:08:33.676436592 +0000 UTC m=+0.144273680 container init 0ff994437a5e1e2f34a125235b6011ffe63dfb408dc3e9ddc7d60c9f4b7ff43f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_grothendieck, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:08:33 compute-0 podman[107566]: 2025-12-09 16:08:33.682179459 +0000 UTC m=+0.150016507 container start 0ff994437a5e1e2f34a125235b6011ffe63dfb408dc3e9ddc7d60c9f4b7ff43f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:08:33 compute-0 podman[107566]: 2025-12-09 16:08:33.686248408 +0000 UTC m=+0.154085506 container attach 0ff994437a5e1e2f34a125235b6011ffe63dfb408dc3e9ddc7d60c9f4b7ff43f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 09 16:08:33 compute-0 python3.9[107561]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 09 16:08:33 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:33 compute-0 sudo[107558]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:34 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 09 16:08:34 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 09 16:08:34 compute-0 inspiring_grothendieck[107583]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:08:34 compute-0 inspiring_grothendieck[107583]: --> All data devices are unavailable
Dec 09 16:08:34 compute-0 systemd[1]: libpod-0ff994437a5e1e2f34a125235b6011ffe63dfb408dc3e9ddc7d60c9f4b7ff43f.scope: Deactivated successfully.
Dec 09 16:08:34 compute-0 podman[107566]: 2025-12-09 16:08:34.336043866 +0000 UTC m=+0.803880904 container died 0ff994437a5e1e2f34a125235b6011ffe63dfb408dc3e9ddc7d60c9f4b7ff43f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:08:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e0fcd113ae40c903df0fd48f1ba04cc85abb9458dbc4fba6e0a5aafdb4a75a6-merged.mount: Deactivated successfully.
Dec 09 16:08:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:08:34 compute-0 podman[107566]: 2025-12-09 16:08:34.378130304 +0000 UTC m=+0.845967342 container remove 0ff994437a5e1e2f34a125235b6011ffe63dfb408dc3e9ddc7d60c9f4b7ff43f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_grothendieck, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:08:34 compute-0 systemd[1]: libpod-conmon-0ff994437a5e1e2f34a125235b6011ffe63dfb408dc3e9ddc7d60c9f4b7ff43f.scope: Deactivated successfully.
Dec 09 16:08:34 compute-0 sudo[107361]: pam_unix(sudo:session): session closed for user root
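The batch run that just closed saw the three LVs (0 physical, 3 LVM) but declined to act: their lv_tags already carry a ceph.osd_fsid, so ceph-volume treats them as consumed, and "All data devices are unavailable" is an idempotency result rather than a failure, as the lvm list output below confirms. A minimal sketch of that availability test, assuming the same JSON shape ceph-volume prints below:

    import json

    def consumed_lvs(report_json: str):
        """Yield (osd_id, lv_path) for LVs whose tags show an existing OSD."""
        for osd_id, lvs in json.loads(report_json).items():
            for lv in lvs:
                if "ceph.osd_fsid=" in lv.get("lv_tags", ""):
                    yield osd_id, lv["lv_path"]

    # Sample built from the logged values for OSD 0.
    sample = json.dumps({"0": [{
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "lv_tags": "ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907",
    }]})
    print(list(consumed_lvs(sample)))  # [('0', '/dev/ceph_vg0/ceph_lv0')]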
Dec 09 16:08:34 compute-0 sudo[107713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:08:34 compute-0 sudo[107713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:08:34 compute-0 sudo[107713]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:34 compute-0 sudo[107762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:08:34 compute-0 sudo[107762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:08:34 compute-0 python3.9[107811]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:08:34 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 09 16:08:34 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 09 16:08:34 compute-0 podman[107825]: 2025-12-09 16:08:34.895715234 +0000 UTC m=+0.054939924 container create 6d7a9d16ab2284841205946b608dca252b1ce39ec26326f3ca68d482bd9937da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_galois, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:08:34 compute-0 systemd[1]: Started libpod-conmon-6d7a9d16ab2284841205946b608dca252b1ce39ec26326f3ca68d482bd9937da.scope.
Dec 09 16:08:34 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:08:34 compute-0 podman[107825]: 2025-12-09 16:08:34.878181413 +0000 UTC m=+0.037406123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:08:34 compute-0 podman[107825]: 2025-12-09 16:08:34.977385087 +0000 UTC m=+0.136609797 container init 6d7a9d16ab2284841205946b608dca252b1ce39ec26326f3ca68d482bd9937da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:08:34 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 09 16:08:34 compute-0 podman[107825]: 2025-12-09 16:08:34.987711048 +0000 UTC m=+0.146935778 container start 6d7a9d16ab2284841205946b608dca252b1ce39ec26326f3ca68d482bd9937da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:08:34 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 09 16:08:34 compute-0 podman[107825]: 2025-12-09 16:08:34.991305653 +0000 UTC m=+0.150530343 container attach 6d7a9d16ab2284841205946b608dca252b1ce39ec26326f3ca68d482bd9937da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_galois, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:08:34 compute-0 naughty_galois[107845]: 167 167
Dec 09 16:08:34 compute-0 systemd[1]: libpod-6d7a9d16ab2284841205946b608dca252b1ce39ec26326f3ca68d482bd9937da.scope: Deactivated successfully.
Dec 09 16:08:34 compute-0 conmon[107845]: conmon 6d7a9d16ab2284841205 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d7a9d16ab2284841205946b608dca252b1ce39ec26326f3ca68d482bd9937da.scope/container/memory.events
Dec 09 16:08:34 compute-0 podman[107825]: 2025-12-09 16:08:34.995910877 +0000 UTC m=+0.155135627 container died 6d7a9d16ab2284841205946b608dca252b1ce39ec26326f3ca68d482bd9937da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_galois, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:08:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-47de34b8f6f200c9f15092d6e3f76697575ac5e3ec7bf832369cc592ecee3d2f-merged.mount: Deactivated successfully.
Dec 09 16:08:35 compute-0 podman[107825]: 2025-12-09 16:08:35.041539009 +0000 UTC m=+0.200763689 container remove 6d7a9d16ab2284841205946b608dca252b1ce39ec26326f3ca68d482bd9937da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 16:08:35 compute-0 systemd[1]: libpod-conmon-6d7a9d16ab2284841205946b608dca252b1ce39ec26326f3ca68d482bd9937da.scope: Deactivated successfully.
Dec 09 16:08:35 compute-0 ceph-mon[75222]: pgmap v275: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:35 compute-0 ceph-mon[75222]: 7.13 scrub starts
Dec 09 16:08:35 compute-0 ceph-mon[75222]: 7.13 scrub ok
Dec 09 16:08:35 compute-0 podman[107896]: 2025-12-09 16:08:35.210004194 +0000 UTC m=+0.052711079 container create dce75539632d1e8f44404c95fa79c29fbc764defbab56bdab512b064e745ca31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 09 16:08:35 compute-0 systemd[1]: Started libpod-conmon-dce75539632d1e8f44404c95fa79c29fbc764defbab56bdab512b064e745ca31.scope.
Dec 09 16:08:35 compute-0 podman[107896]: 2025-12-09 16:08:35.189579298 +0000 UTC m=+0.032286173 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:08:35 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f483cd23a3f4ca8fff180ee3eb705ebd73ef8c53ddd66aae314f83277eb379f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f483cd23a3f4ca8fff180ee3eb705ebd73ef8c53ddd66aae314f83277eb379f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f483cd23a3f4ca8fff180ee3eb705ebd73ef8c53ddd66aae314f83277eb379f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f483cd23a3f4ca8fff180ee3eb705ebd73ef8c53ddd66aae314f83277eb379f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:08:35 compute-0 podman[107896]: 2025-12-09 16:08:35.303496651 +0000 UTC m=+0.146203516 container init dce75539632d1e8f44404c95fa79c29fbc764defbab56bdab512b064e745ca31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_satoshi, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:08:35 compute-0 podman[107896]: 2025-12-09 16:08:35.312093312 +0000 UTC m=+0.154800157 container start dce75539632d1e8f44404c95fa79c29fbc764defbab56bdab512b064e745ca31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_satoshi, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:08:35 compute-0 podman[107896]: 2025-12-09 16:08:35.315597184 +0000 UTC m=+0.158304079 container attach dce75539632d1e8f44404c95fa79c29fbc764defbab56bdab512b064e745ca31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_satoshi, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:08:35 compute-0 bold_satoshi[107929]: {
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:     "0": [
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:         {
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "devices": [
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "/dev/loop3"
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             ],
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_name": "ceph_lv0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_size": "21470642176",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "name": "ceph_lv0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "tags": {
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.cluster_name": "ceph",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.crush_device_class": "",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.encrypted": "0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.objectstore": "bluestore",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.osd_id": "0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:08:35 compute-0 sudo[108046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvqppmoxbbreiodqgvfldiwzebdzvcgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296515.276718-106-22332540693114/AnsiballZ_dnf.py'
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.type": "block",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.vdo": "0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.with_tpm": "0"
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             },
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "type": "block",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "vg_name": "ceph_vg0"
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:         }
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:     ],
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:     "1": [
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:         {
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "devices": [
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "/dev/loop4"
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             ],
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_name": "ceph_lv1",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_size": "21470642176",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "name": "ceph_lv1",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "tags": {
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.cluster_name": "ceph",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.crush_device_class": "",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.encrypted": "0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.objectstore": "bluestore",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.osd_id": "1",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.type": "block",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.vdo": "0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.with_tpm": "0"
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             },
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "type": "block",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "vg_name": "ceph_vg1"
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:         }
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:     ],
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:     "2": [
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:         {
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "devices": [
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "/dev/loop5"
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             ],
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_name": "ceph_lv2",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_size": "21470642176",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "name": "ceph_lv2",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "tags": {
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.cluster_name": "ceph",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.crush_device_class": "",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.encrypted": "0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.objectstore": "bluestore",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.osd_id": "2",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.type": "block",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.vdo": "0",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:                 "ceph.with_tpm": "0"
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             },
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "type": "block",
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:             "vg_name": "ceph_vg2"
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:         }
Dec 09 16:08:35 compute-0 bold_satoshi[107929]:     ]
Dec 09 16:08:35 compute-0 bold_satoshi[107929]: }
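The JSON block above is the full `ceph-volume lvm list --format json` report: the top-level keys are OSD ids, each holding a list of LV records whose "tags" dict duplicates the flat lv_tags string. A small sketch, using the logged values, that reduces it to an OSD-to-backing-device map:

    def osd_layout(report: dict) -> dict:
        """Map OSD id -> (LV path, physical devices) from `lvm list` JSON."""
        return {
            int(osd_id): (lv["lv_path"], lv["devices"])
            for osd_id, lvs in report.items()
            for lv in lvs
            if lv["type"] == "block"
        }

    # Values taken from the report above; the real input would be the parsed
    # output of `ceph-volume lvm list --format json`.
    report = {
        "0": [{"type": "block", "lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"]}],
        "1": [{"type": "block", "lv_path": "/dev/ceph_vg1/ceph_lv1", "devices": ["/dev/loop4"]}],
        "2": [{"type": "block", "lv_path": "/dev/ceph_vg2/ceph_lv2", "devices": ["/dev/loop5"]}],
    }
    print(osd_layout(report))
    # {0: ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3']),
    #  1: ('/dev/ceph_vg1/ceph_lv1', ['/dev/loop4']),
    #  2: ('/dev/ceph_vg2/ceph_lv2', ['/dev/loop5'])}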
Dec 09 16:08:35 compute-0 sudo[108046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:35 compute-0 systemd[1]: libpod-dce75539632d1e8f44404c95fa79c29fbc764defbab56bdab512b064e745ca31.scope: Deactivated successfully.
Dec 09 16:08:35 compute-0 podman[107896]: 2025-12-09 16:08:35.623163397 +0000 UTC m=+0.465870242 container died dce75539632d1e8f44404c95fa79c29fbc764defbab56bdab512b064e745ca31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 09 16:08:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f483cd23a3f4ca8fff180ee3eb705ebd73ef8c53ddd66aae314f83277eb379f-merged.mount: Deactivated successfully.
Dec 09 16:08:35 compute-0 podman[107896]: 2025-12-09 16:08:35.662594428 +0000 UTC m=+0.505301283 container remove dce75539632d1e8f44404c95fa79c29fbc764defbab56bdab512b064e745ca31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 09 16:08:35 compute-0 systemd[1]: libpod-conmon-dce75539632d1e8f44404c95fa79c29fbc764defbab56bdab512b064e745ca31.scope: Deactivated successfully.
Dec 09 16:08:35 compute-0 sudo[107762]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:35 compute-0 sudo[108060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:08:35 compute-0 sudo[108060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:08:35 compute-0 sudo[108060]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:35 compute-0 sudo[108085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:08:35 compute-0 sudo[108085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:08:35 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 09 16:08:35 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 09 16:08:35 compute-0 python3.9[108048]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:08:35 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:35 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 09 16:08:35 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 09 16:08:36 compute-0 ceph-mon[75222]: 4.e scrub starts
Dec 09 16:08:36 compute-0 ceph-mon[75222]: 4.e scrub ok
Dec 09 16:08:36 compute-0 ceph-mon[75222]: 3.17 scrub starts
Dec 09 16:08:36 compute-0 ceph-mon[75222]: 3.17 scrub ok
Dec 09 16:08:36 compute-0 podman[108123]: 2025-12-09 16:08:36.120161507 +0000 UTC m=+0.047219428 container create 539128ca235f2428b10bf13abb11e43f0181f16d1506b69ef39ed8cc9fd52d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:08:36 compute-0 systemd[1]: Started libpod-conmon-539128ca235f2428b10bf13abb11e43f0181f16d1506b69ef39ed8cc9fd52d8b.scope.
Dec 09 16:08:36 compute-0 podman[108123]: 2025-12-09 16:08:36.096868528 +0000 UTC m=+0.023926509 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:08:36 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:08:36 compute-0 podman[108123]: 2025-12-09 16:08:36.216095536 +0000 UTC m=+0.143153477 container init 539128ca235f2428b10bf13abb11e43f0181f16d1506b69ef39ed8cc9fd52d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:08:36 compute-0 podman[108123]: 2025-12-09 16:08:36.222196644 +0000 UTC m=+0.149254535 container start 539128ca235f2428b10bf13abb11e43f0181f16d1506b69ef39ed8cc9fd52d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_feynman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 09 16:08:36 compute-0 podman[108123]: 2025-12-09 16:08:36.225352726 +0000 UTC m=+0.152410657 container attach 539128ca235f2428b10bf13abb11e43f0181f16d1506b69ef39ed8cc9fd52d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 09 16:08:36 compute-0 flamboyant_feynman[108140]: 167 167
Dec 09 16:08:36 compute-0 systemd[1]: libpod-539128ca235f2428b10bf13abb11e43f0181f16d1506b69ef39ed8cc9fd52d8b.scope: Deactivated successfully.
Dec 09 16:08:36 compute-0 podman[108123]: 2025-12-09 16:08:36.22891232 +0000 UTC m=+0.155970211 container died 539128ca235f2428b10bf13abb11e43f0181f16d1506b69ef39ed8cc9fd52d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:08:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b96f8f25788e5227c135f726875eb3927e59ff9d09f59f441b65f438c125aaa-merged.mount: Deactivated successfully.
Dec 09 16:08:36 compute-0 podman[108123]: 2025-12-09 16:08:36.272453971 +0000 UTC m=+0.199511852 container remove 539128ca235f2428b10bf13abb11e43f0181f16d1506b69ef39ed8cc9fd52d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_feynman, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:08:36 compute-0 systemd[1]: libpod-conmon-539128ca235f2428b10bf13abb11e43f0181f16d1506b69ef39ed8cc9fd52d8b.scope: Deactivated successfully.
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:08:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
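Each pg_autoscaler line follows one formula: pg target = capacity ratio * bias * (OSD count * mon_target_pg_per_osd). With the three OSDs above and the default mon_target_pg_per_osd of 100, the multiplier is 300, which reproduces every logged target (the 300 is inferred from the logged ratios here, not read from the cluster's config). Targets are then quantized: '.mgr' is clamped up to the 1 PG minimum, while pools whose target rounds below their current 32 PGs are left at 32. A worked check:

    N_OSDS = 3                 # the three LVM-backed OSDs created above
    TARGET_PG_PER_OSD = 100    # assumed default for mon_target_pg_per_osd

    POOLS = {                  # (capacity ratio, bias) from the log lines above
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (2.0333656678172135e-06, 4.0),
        "default.rgw.log":    (4.1969867161554995e-06, 1.0),
    }

    for pool, (ratio, bias) in POOLS.items():
        target = ratio * bias * N_OSDS * TARGET_PG_PER_OSD
        print(f"{pool}: pg target {target:.16g}")
    # .mgr: pg target 0.002155724995116234   (logged 0.0021557249951162337)
    # cephfs.cephfs.meta: pg target 0.002440038801380656
    # default.rgw.log: pg target 0.00125909601484665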
Dec 09 16:08:36 compute-0 podman[108164]: 2025-12-09 16:08:36.466853052 +0000 UTC m=+0.064869263 container create 5155aa9ccd935787f0ce7e7281360e49f0d0fa72dd3e8b6d29f4f365b8188f8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 09 16:08:36 compute-0 systemd[1]: Started libpod-conmon-5155aa9ccd935787f0ce7e7281360e49f0d0fa72dd3e8b6d29f4f365b8188f8f.scope.
Dec 09 16:08:36 compute-0 podman[108164]: 2025-12-09 16:08:36.436816636 +0000 UTC m=+0.034832867 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:08:36 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d595cadcfb30b33ccdb782974699b093464eef6de776d47cefd54ae96b4218f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d595cadcfb30b33ccdb782974699b093464eef6de776d47cefd54ae96b4218f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d595cadcfb30b33ccdb782974699b093464eef6de776d47cefd54ae96b4218f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d595cadcfb30b33ccdb782974699b093464eef6de776d47cefd54ae96b4218f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:08:36 compute-0 podman[108164]: 2025-12-09 16:08:36.568131977 +0000 UTC m=+0.166148218 container init 5155aa9ccd935787f0ce7e7281360e49f0d0fa72dd3e8b6d29f4f365b8188f8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_williamson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:08:36 compute-0 podman[108164]: 2025-12-09 16:08:36.574840453 +0000 UTC m=+0.172856654 container start 5155aa9ccd935787f0ce7e7281360e49f0d0fa72dd3e8b6d29f4f365b8188f8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 09 16:08:36 compute-0 podman[108164]: 2025-12-09 16:08:36.578420757 +0000 UTC m=+0.176437008 container attach 5155aa9ccd935787f0ce7e7281360e49f0d0fa72dd3e8b6d29f4f365b8188f8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_williamson, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:08:37 compute-0 ceph-mon[75222]: 5.13 scrub starts
Dec 09 16:08:37 compute-0 ceph-mon[75222]: 5.13 scrub ok
Dec 09 16:08:37 compute-0 ceph-mon[75222]: pgmap v276: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:37 compute-0 ceph-mon[75222]: 2.16 scrub starts
Dec 09 16:08:37 compute-0 ceph-mon[75222]: 2.16 scrub ok
Dec 09 16:08:37 compute-0 sudo[108046]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:37 compute-0 lvm[108284]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:08:37 compute-0 lvm[108284]: VG ceph_vg1 finished
Dec 09 16:08:37 compute-0 lvm[108280]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:08:37 compute-0 lvm[108280]: VG ceph_vg0 finished
Dec 09 16:08:37 compute-0 lvm[108286]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:08:37 compute-0 lvm[108286]: VG ceph_vg2 finished
Dec 09 16:08:37 compute-0 lvm[108287]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:08:37 compute-0 lvm[108287]: VG ceph_vg1 finished
Dec 09 16:08:37 compute-0 relaxed_williamson[108181]: {}
Dec 09 16:08:37 compute-0 systemd[1]: libpod-5155aa9ccd935787f0ce7e7281360e49f0d0fa72dd3e8b6d29f4f365b8188f8f.scope: Deactivated successfully.
Dec 09 16:08:37 compute-0 podman[108164]: 2025-12-09 16:08:37.414385415 +0000 UTC m=+1.012401646 container died 5155aa9ccd935787f0ce7e7281360e49f0d0fa72dd3e8b6d29f4f365b8188f8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:08:37 compute-0 systemd[1]: libpod-5155aa9ccd935787f0ce7e7281360e49f0d0fa72dd3e8b6d29f4f365b8188f8f.scope: Consumed 1.275s CPU time.
Dec 09 16:08:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d595cadcfb30b33ccdb782974699b093464eef6de776d47cefd54ae96b4218f-merged.mount: Deactivated successfully.
Dec 09 16:08:37 compute-0 podman[108164]: 2025-12-09 16:08:37.460548752 +0000 UTC m=+1.058564963 container remove 5155aa9ccd935787f0ce7e7281360e49f0d0fa72dd3e8b6d29f4f365b8188f8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_williamson, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:08:37 compute-0 systemd[1]: libpod-conmon-5155aa9ccd935787f0ce7e7281360e49f0d0fa72dd3e8b6d29f4f365b8188f8f.scope: Deactivated successfully.
Dec 09 16:08:37 compute-0 sudo[108085]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:08:37 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:08:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:08:37 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:08:37 compute-0 sudo[108353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:08:37 compute-0 sudo[108353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:08:37 compute-0 sudo[108353]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:37 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 09 16:08:37 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 09 16:08:37 compute-0 sudo[108451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpvohoswtpzlohtiopbemvmjwjqfnnke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296517.3944714-114-270765765135775/AnsiballZ_command.py'
Dec 09 16:08:37 compute-0 sudo[108451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:37 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 09 16:08:37 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 09 16:08:37 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:38 compute-0 python3.9[108453]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:08:38 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec 09 16:08:38 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec 09 16:08:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:08:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:08:38 compute-0 ceph-mon[75222]: 4.1 scrub starts
Dec 09 16:08:38 compute-0 ceph-mon[75222]: 4.1 scrub ok
Dec 09 16:08:38 compute-0 ceph-mon[75222]: 10.1e scrub starts
Dec 09 16:08:38 compute-0 ceph-mon[75222]: 10.1e scrub ok
Dec 09 16:08:38 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 09 16:08:38 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 09 16:08:38 compute-0 sudo[108451]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:38 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 09 16:08:38 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 09 16:08:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:08:39 compute-0 sudo[108738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgvhvxjduaglkiqiwkmfszgoefsarsnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296519.001349-122-50347873028919/AnsiballZ_file.py'
Dec 09 16:08:39 compute-0 ceph-mon[75222]: 2.17 scrub starts
Dec 09 16:08:39 compute-0 ceph-mon[75222]: 2.17 scrub ok
Dec 09 16:08:39 compute-0 ceph-mon[75222]: pgmap v277: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:39 compute-0 ceph-mon[75222]: 11.1a scrub starts
Dec 09 16:08:39 compute-0 ceph-mon[75222]: 11.1a scrub ok
Dec 09 16:08:39 compute-0 sudo[108738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:39 compute-0 python3.9[108740]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 09 16:08:39 compute-0 sudo[108738]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:39 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 09 16:08:39 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 09 16:08:39 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:40 compute-0 python3.9[108892]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:08:40 compute-0 ceph-mon[75222]: 4.10 scrub starts
Dec 09 16:08:40 compute-0 ceph-mon[75222]: 4.10 scrub ok
Dec 09 16:08:40 compute-0 sshd-session[108875]: Invalid user dspace from 146.190.31.45 port 50454
Dec 09 16:08:40 compute-0 sshd-session[108875]: Connection closed by invalid user dspace 146.190.31.45 port 50454 [preauth]
Dec 09 16:08:40 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 09 16:08:40 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 09 16:08:41 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 09 16:08:41 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 09 16:08:41 compute-0 sudo[109044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaxckgkxthhhgjmsmnwnvkxopzpumudl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296520.7474253-138-97028279325080/AnsiballZ_dnf.py'
Dec 09 16:08:41 compute-0 sudo[109044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:41 compute-0 python3.9[109046]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:08:41 compute-0 ceph-mon[75222]: 4.12 scrub starts
Dec 09 16:08:41 compute-0 ceph-mon[75222]: 4.12 scrub ok
Dec 09 16:08:41 compute-0 ceph-mon[75222]: pgmap v278: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:41 compute-0 ceph-mon[75222]: 8.1d scrub starts
Dec 09 16:08:41 compute-0 ceph-mon[75222]: 8.1d scrub ok
Dec 09 16:08:41 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 09 16:08:41 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 09 16:08:41 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:41 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 09 16:08:41 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 09 16:08:42 compute-0 ceph-mon[75222]: 5.11 scrub starts
Dec 09 16:08:42 compute-0 ceph-mon[75222]: 5.11 scrub ok
Dec 09 16:08:42 compute-0 ceph-mon[75222]: 3.11 scrub starts
Dec 09 16:08:42 compute-0 ceph-mon[75222]: 3.11 scrub ok
Dec 09 16:08:42 compute-0 ceph-mon[75222]: 11.1 scrub starts
Dec 09 16:08:42 compute-0 ceph-mon[75222]: 11.1 scrub ok
Dec 09 16:08:42 compute-0 sudo[109044]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:42 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 09 16:08:42 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 09 16:08:43 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 09 16:08:43 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 09 16:08:43 compute-0 sudo[109197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glsvdcwwtfxanrhceeqcpnnovouxmxqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296522.8156314-147-145328752084546/AnsiballZ_dnf.py'
Dec 09 16:08:43 compute-0 sudo[109197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:43 compute-0 python3.9[109199]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:08:43 compute-0 ceph-mon[75222]: pgmap v279: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:43 compute-0 ceph-mon[75222]: 10.1a scrub starts
Dec 09 16:08:43 compute-0 ceph-mon[75222]: 10.1a scrub ok
Dec 09 16:08:43 compute-0 ceph-mon[75222]: 3.15 scrub starts
Dec 09 16:08:43 compute-0 ceph-mon[75222]: 3.15 scrub ok
Dec 09 16:08:43 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 09 16:08:43 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 09 16:08:43 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:08:44 compute-0 ceph-mon[75222]: 7.15 scrub starts
Dec 09 16:08:44 compute-0 ceph-mon[75222]: 7.15 scrub ok
Dec 09 16:08:44 compute-0 sudo[109197]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:44 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 09 16:08:44 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 09 16:08:45 compute-0 sudo[109350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnpglzejxfedoirpunnbmwwtawommzfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296525.1542287-159-214784901467133/AnsiballZ_stat.py'
Dec 09 16:08:45 compute-0 sudo[109350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:45 compute-0 python3.9[109352]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:08:45 compute-0 ceph-mon[75222]: pgmap v280: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:45 compute-0 ceph-mon[75222]: 10.12 scrub starts
Dec 09 16:08:45 compute-0 ceph-mon[75222]: 10.12 scrub ok
Dec 09 16:08:45 compute-0 sudo[109350]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:45 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 09 16:08:45 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 09 16:08:45 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:46 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Dec 09 16:08:46 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Dec 09 16:08:46 compute-0 sudo[109504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrwrfztjfiutqphnxieeqpuausvjuvul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296525.7806926-167-255936862518821/AnsiballZ_slurp.py'
Dec 09 16:08:46 compute-0 sudo[109504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:08:46 compute-0 python3.9[109506]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 09 16:08:46 compute-0 sudo[109504]: pam_unix(sudo:session): session closed for user root
Dec 09 16:08:46 compute-0 ceph-mon[75222]: 10.14 scrub starts
Dec 09 16:08:46 compute-0 ceph-mon[75222]: 10.14 scrub ok
Dec 09 16:08:46 compute-0 ceph-mon[75222]: 10.1 scrub starts
Dec 09 16:08:46 compute-0 ceph-mon[75222]: 10.1 scrub ok
Dec 09 16:08:47 compute-0 sshd-session[106228]: Connection closed by 192.168.122.30 port 41146
Dec 09 16:08:47 compute-0 sshd-session[106225]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:08:47 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Dec 09 16:08:47 compute-0 systemd[1]: session-36.scope: Consumed 18.643s CPU time.
Dec 09 16:08:47 compute-0 systemd-logind[786]: Session 36 logged out. Waiting for processes to exit.
Dec 09 16:08:47 compute-0 systemd-logind[786]: Removed session 36.
Dec 09 16:08:47 compute-0 ceph-mon[75222]: pgmap v281: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:47 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:49 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Dec 09 16:08:49 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Dec 09 16:08:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:08:49 compute-0 ceph-mon[75222]: pgmap v282: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:49 compute-0 ceph-mon[75222]: 8.1f scrub starts
Dec 09 16:08:49 compute-0 ceph-mon[75222]: 8.1f scrub ok
Dec 09 16:08:49 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 09 16:08:49 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 09 16:08:49 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:49 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 09 16:08:50 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 09 16:08:50 compute-0 ceph-mon[75222]: 6.2 scrub starts
Dec 09 16:08:50 compute-0 ceph-mon[75222]: 6.2 scrub ok
Dec 09 16:08:50 compute-0 ceph-mon[75222]: 2.8 scrub starts
Dec 09 16:08:50 compute-0 ceph-mon[75222]: 2.8 scrub ok
Dec 09 16:08:51 compute-0 ceph-mon[75222]: pgmap v283: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:51 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 09 16:08:51 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 09 16:08:51 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:52 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 09 16:08:52 compute-0 ceph-mon[75222]: 6.6 scrub starts
Dec 09 16:08:52 compute-0 ceph-mon[75222]: 6.6 scrub ok
Dec 09 16:08:52 compute-0 ceph-mon[75222]: pgmap v284: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:52 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 09 16:08:52 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 09 16:08:52 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 09 16:08:53 compute-0 sshd-session[109531]: Connection reset by 147.185.132.123 port 58038 [preauth]
Dec 09 16:08:53 compute-0 sshd-session[109533]: Accepted publickey for zuul from 192.168.122.30 port 37272 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:08:53 compute-0 systemd-logind[786]: New session 37 of user zuul.
Dec 09 16:08:53 compute-0 systemd[1]: Started Session 37 of User zuul.
Dec 09 16:08:53 compute-0 sshd-session[109533]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:08:53 compute-0 ceph-mon[75222]: 6.4 scrub starts
Dec 09 16:08:53 compute-0 ceph-mon[75222]: 6.4 scrub ok
Dec 09 16:08:53 compute-0 ceph-mon[75222]: 11.1b scrub starts
Dec 09 16:08:53 compute-0 ceph-mon[75222]: 11.1b scrub ok
Dec 09 16:08:53 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:54 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec 09 16:08:54 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec 09 16:08:54 compute-0 python3.9[109686]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:08:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:08:54 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 09 16:08:54 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 09 16:08:54 compute-0 ceph-mon[75222]: pgmap v285: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:54 compute-0 ceph-mon[75222]: 8.18 scrub starts
Dec 09 16:08:54 compute-0 ceph-mon[75222]: 8.18 scrub ok
Dec 09 16:08:54 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec 09 16:08:54 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec 09 16:08:55 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 09 16:08:55 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 09 16:08:55 compute-0 python3.9[109840]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:08:55 compute-0 ceph-mon[75222]: 6.d scrub starts
Dec 09 16:08:55 compute-0 ceph-mon[75222]: 6.d scrub ok
Dec 09 16:08:55 compute-0 ceph-mon[75222]: 7.11 scrub starts
Dec 09 16:08:55 compute-0 ceph-mon[75222]: 7.11 scrub ok
Dec 09 16:08:55 compute-0 ceph-mon[75222]: 5.15 scrub starts
Dec 09 16:08:55 compute-0 ceph-mon[75222]: 5.15 scrub ok
Dec 09 16:08:55 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 09 16:08:55 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 09 16:08:55 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:56 compute-0 python3.9[110033]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:08:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:08:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:08:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:08:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:08:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:08:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:08:56 compute-0 ceph-mon[75222]: 11.1c scrub starts
Dec 09 16:08:56 compute-0 ceph-mon[75222]: 11.1c scrub ok
Dec 09 16:08:56 compute-0 ceph-mon[75222]: pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:56 compute-0 sshd-session[109536]: Connection closed by 192.168.122.30 port 37272
Dec 09 16:08:56 compute-0 sshd-session[109533]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:08:56 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Dec 09 16:08:56 compute-0 systemd[1]: session-37.scope: Consumed 2.534s CPU time.
Dec 09 16:08:56 compute-0 systemd-logind[786]: Session 37 logged out. Waiting for processes to exit.
Dec 09 16:08:56 compute-0 systemd-logind[786]: Removed session 37.
Dec 09 16:08:57 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 09 16:08:57 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 09 16:08:57 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:58 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 09 16:08:58 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 09 16:08:59 compute-0 ceph-mon[75222]: 4.a scrub starts
Dec 09 16:08:59 compute-0 ceph-mon[75222]: 4.a scrub ok
Dec 09 16:08:59 compute-0 ceph-mon[75222]: pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:08:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:08:59 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:00 compute-0 ceph-mon[75222]: 6.e scrub starts
Dec 09 16:09:00 compute-0 ceph-mon[75222]: 6.e scrub ok
Dec 09 16:09:00 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 09 16:09:00 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 09 16:09:01 compute-0 ceph-mon[75222]: pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:01 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 09 16:09:01 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 09 16:09:01 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:02 compute-0 ceph-mon[75222]: 6.1 scrub starts
Dec 09 16:09:02 compute-0 ceph-mon[75222]: 6.1 scrub ok
Dec 09 16:09:02 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 09 16:09:02 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 09 16:09:03 compute-0 ceph-mon[75222]: 6.c scrub starts
Dec 09 16:09:03 compute-0 ceph-mon[75222]: 6.c scrub ok
Dec 09 16:09:03 compute-0 ceph-mon[75222]: pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:03 compute-0 sshd-session[110059]: Accepted publickey for zuul from 192.168.122.30 port 50558 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:09:03 compute-0 systemd-logind[786]: New session 38 of user zuul.
Dec 09 16:09:03 compute-0 systemd[1]: Started Session 38 of User zuul.
Dec 09 16:09:03 compute-0 sshd-session[110059]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:09:03 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:04 compute-0 ceph-mon[75222]: 11.1e scrub starts
Dec 09 16:09:04 compute-0 ceph-mon[75222]: 11.1e scrub ok
Dec 09 16:09:04 compute-0 python3.9[110212]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:09:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:09:04 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 09 16:09:04 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 09 16:09:04 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 09 16:09:04 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 09 16:09:05 compute-0 python3.9[110366]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:09:05 compute-0 ceph-mon[75222]: pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:05 compute-0 ceph-mon[75222]: 6.b scrub starts
Dec 09 16:09:05 compute-0 ceph-mon[75222]: 6.b scrub ok
Dec 09 16:09:05 compute-0 ceph-mon[75222]: 3.16 scrub starts
Dec 09 16:09:05 compute-0 ceph-mon[75222]: 3.16 scrub ok
Dec 09 16:09:05 compute-0 sudo[110520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcujoaiappjliypwfvtlnhfzdqgpvcdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296545.5682483-40-264992149856732/AnsiballZ_setup.py'
Dec 09 16:09:05 compute-0 sudo[110520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:05 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:06 compute-0 python3.9[110522]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:09:06 compute-0 sudo[110520]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:06 compute-0 sudo[110604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wopwsazykhlidwnkjrphqybocvumsxfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296545.5682483-40-264992149856732/AnsiballZ_dnf.py'
Dec 09 16:09:06 compute-0 sudo[110604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:07 compute-0 python3.9[110606]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:09:07 compute-0 ceph-mon[75222]: pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:07 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 09 16:09:07 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 09 16:09:07 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Dec 09 16:09:07 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Dec 09 16:09:07 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:08 compute-0 ceph-mon[75222]: 9.15 scrub starts
Dec 09 16:09:08 compute-0 ceph-mon[75222]: 9.15 scrub ok
Dec 09 16:09:08 compute-0 sudo[110604]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:08 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 09 16:09:08 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 09 16:09:08 compute-0 sudo[110757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbwutlgzzukcexbptkyypsbbrbgyzkty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296548.633549-52-85979308170033/AnsiballZ_setup.py'
Dec 09 16:09:08 compute-0 sudo[110757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:09 compute-0 python3.9[110759]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:09:09 compute-0 ceph-mon[75222]: 11.19 scrub starts
Dec 09 16:09:09 compute-0 ceph-mon[75222]: 11.19 scrub ok
Dec 09 16:09:09 compute-0 ceph-mon[75222]: pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:09:09 compute-0 sudo[110757]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:09 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:10 compute-0 ceph-mon[75222]: 2.13 scrub starts
Dec 09 16:09:10 compute-0 ceph-mon[75222]: 2.13 scrub ok
Dec 09 16:09:10 compute-0 sudo[110952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kykuyznajebnonelgzjgqsipkaiobojz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296549.917161-63-193598028025311/AnsiballZ_file.py'
Dec 09 16:09:10 compute-0 sudo[110952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:10 compute-0 python3.9[110954]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:09:10 compute-0 sudo[110952]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:11 compute-0 sudo[111104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgdzaqmieyddlkchhuyxhyglkzyildyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296550.7701793-71-69365006293034/AnsiballZ_command.py'
Dec 09 16:09:11 compute-0 sudo[111104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:11 compute-0 ceph-mon[75222]: pgmap v293: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:11 compute-0 python3.9[111106]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:09:11 compute-0 sudo[111104]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:11 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 09 16:09:11 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 09 16:09:11 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:12 compute-0 sudo[111268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfhjbgxttdmusbzyvodhutoasurgudib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296551.7530563-79-91347879254710/AnsiballZ_stat.py'
Dec 09 16:09:12 compute-0 sudo[111268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:12 compute-0 python3.9[111270]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:09:12 compute-0 ceph-mon[75222]: 11.1f scrub starts
Dec 09 16:09:12 compute-0 ceph-mon[75222]: 11.1f scrub ok
Dec 09 16:09:12 compute-0 sudo[111268]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:12 compute-0 sudo[111346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhtyntmmklvlhstffuqzjfuxbsyiyacd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296551.7530563-79-91347879254710/AnsiballZ_file.py'
Dec 09 16:09:12 compute-0 sudo[111346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:12 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Dec 09 16:09:12 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Dec 09 16:09:12 compute-0 python3.9[111348]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:09:12 compute-0 sudo[111346]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:13 compute-0 sudo[111498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzduyxnmtljwsmhxnxafyboxsekthpea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296553.0450013-91-55558659889285/AnsiballZ_stat.py'
Dec 09 16:09:13 compute-0 sudo[111498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:13 compute-0 ceph-mon[75222]: pgmap v294: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:13 compute-0 ceph-mon[75222]: 8.1c scrub starts
Dec 09 16:09:13 compute-0 ceph-mon[75222]: 8.1c scrub ok
Dec 09 16:09:13 compute-0 python3.9[111500]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:09:13 compute-0 sudo[111498]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:13 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 09 16:09:13 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 09 16:09:13 compute-0 sudo[111576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tukefcmagpcdmidbonzqrxbnxabvvdbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296553.0450013-91-55558659889285/AnsiballZ_file.py'
Dec 09 16:09:13 compute-0 sudo[111576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:13 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:13 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 09 16:09:13 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 09 16:09:14 compute-0 python3.9[111578]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:09:14 compute-0 sudo[111576]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:09:14 compute-0 ceph-mon[75222]: 4.13 scrub starts
Dec 09 16:09:14 compute-0 ceph-mon[75222]: 4.13 scrub ok
Dec 09 16:09:14 compute-0 ceph-mon[75222]: 3.12 scrub starts
Dec 09 16:09:14 compute-0 ceph-mon[75222]: 3.12 scrub ok
Dec 09 16:09:14 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 09 16:09:14 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 09 16:09:14 compute-0 sudo[111728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqyznzqmbmpygolxexhxpaviispfzvyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296554.2752857-104-74545017227090/AnsiballZ_ini_file.py'
Dec 09 16:09:14 compute-0 sudo[111728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:14 compute-0 python3.9[111730]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:09:14 compute-0 sudo[111728]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:15 compute-0 sudo[111880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsnerscxkaviieetrzaiqxqkugkdilrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296555.1224859-104-39360776939545/AnsiballZ_ini_file.py'
Dec 09 16:09:15 compute-0 sudo[111880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:15 compute-0 ceph-mon[75222]: pgmap v295: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:15 compute-0 ceph-mon[75222]: 4.11 scrub starts
Dec 09 16:09:15 compute-0 ceph-mon[75222]: 4.11 scrub ok
Dec 09 16:09:15 compute-0 python3.9[111882]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:09:15 compute-0 sudo[111880]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:15 compute-0 sudo[112032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezuxlrfsvbplywsgjjxrcxymmhkfwpkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296555.6969588-104-210013534459838/AnsiballZ_ini_file.py'
Dec 09 16:09:15 compute-0 sudo[112032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:15 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:16 compute-0 python3.9[112034]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:09:16 compute-0 sudo[112032]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:16 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 09 16:09:16 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 09 16:09:16 compute-0 sudo[112184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgdnnhkznjovkzfzsiylmqqvguooklqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296556.2540152-104-49709229569810/AnsiballZ_ini_file.py'
Dec 09 16:09:16 compute-0 sudo[112184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:16 compute-0 python3.9[112186]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:09:16 compute-0 sudo[112184]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:17 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 09 16:09:17 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 09 16:09:17 compute-0 sudo[112336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exaxibslgvribuzssthjsstsvnwdptow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296556.983282-135-121932251473294/AnsiballZ_dnf.py'
Dec 09 16:09:17 compute-0 sudo[112336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:17 compute-0 ceph-mon[75222]: pgmap v296: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:17 compute-0 ceph-mon[75222]: 9.14 scrub starts
Dec 09 16:09:17 compute-0 ceph-mon[75222]: 9.14 scrub ok
Dec 09 16:09:17 compute-0 ceph-mon[75222]: 8.1a scrub starts
Dec 09 16:09:17 compute-0 ceph-mon[75222]: 8.1a scrub ok
Dec 09 16:09:17 compute-0 python3.9[112338]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:09:17 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:18 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 09 16:09:18 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 09 16:09:18 compute-0 ceph-mon[75222]: 2.11 scrub starts
Dec 09 16:09:18 compute-0 ceph-mon[75222]: 2.11 scrub ok
Dec 09 16:09:18 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 09 16:09:18 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 09 16:09:18 compute-0 sudo[112336]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:19 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec 09 16:09:19 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec 09 16:09:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:09:19 compute-0 sudo[112489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuehrxomctcqsrlchtmxpufrpvfkvczf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296559.1670983-146-98097746789141/AnsiballZ_setup.py'
Dec 09 16:09:19 compute-0 sudo[112489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:19 compute-0 ceph-mon[75222]: pgmap v297: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:19 compute-0 ceph-mon[75222]: 8.12 scrub starts
Dec 09 16:09:19 compute-0 ceph-mon[75222]: 8.12 scrub ok
Dec 09 16:09:19 compute-0 ceph-mon[75222]: 10.16 scrub starts
Dec 09 16:09:19 compute-0 ceph-mon[75222]: 10.16 scrub ok
Dec 09 16:09:19 compute-0 python3.9[112491]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:09:19 compute-0 sudo[112489]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:19 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:20 compute-0 sudo[112643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuldgksdocjxwccqriuekkoafrkhqwyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296559.954231-154-225092409594973/AnsiballZ_stat.py'
Dec 09 16:09:20 compute-0 sudo[112643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:20 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 09 16:09:20 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 09 16:09:20 compute-0 python3.9[112645]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:09:20 compute-0 sudo[112643]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:21 compute-0 sudo[112795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqkxrysovtrzatqklojkrwifinfrktnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296560.7623048-163-111840225428350/AnsiballZ_stat.py'
Dec 09 16:09:21 compute-0 sudo[112795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:21 compute-0 python3.9[112797]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:09:21 compute-0 sudo[112795]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:21 compute-0 ceph-mon[75222]: pgmap v298: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:21 compute-0 ceph-mon[75222]: 9.10 scrub starts
Dec 09 16:09:21 compute-0 ceph-mon[75222]: 9.10 scrub ok
Dec 09 16:09:21 compute-0 sudo[112947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ungdaabdhgetpzmyktaflqxciknkzdly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296561.5858622-173-266509355282698/AnsiballZ_command.py'
Dec 09 16:09:21 compute-0 sudo[112947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:21 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:22 compute-0 python3.9[112949]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:09:22 compute-0 sudo[112947]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:22 compute-0 ceph-mon[75222]: pgmap v299: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:22 compute-0 sudo[113100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aypdlmzhjchlcecwestrokotcdsvlczt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296562.2849152-183-242798631418199/AnsiballZ_service_facts.py'
Dec 09 16:09:22 compute-0 sudo[113100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:22 compute-0 python3.9[113102]: ansible-service_facts Invoked
Dec 09 16:09:23 compute-0 network[113119]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 16:09:23 compute-0 network[113120]: 'network-scripts' will be removed from distribution in near future.
Dec 09 16:09:23 compute-0 network[113121]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 16:09:23 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:24 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 09 16:09:24 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 09 16:09:24 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 09 16:09:24 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 09 16:09:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:09:25 compute-0 ceph-mon[75222]: pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:25 compute-0 ceph-mon[75222]: 5.14 scrub starts
Dec 09 16:09:25 compute-0 ceph-mon[75222]: 5.14 scrub ok
Dec 09 16:09:25 compute-0 ceph-mon[75222]: 9.12 scrub starts
Dec 09 16:09:25 compute-0 ceph-mon[75222]: 9.12 scrub ok
Dec 09 16:09:25 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec 09 16:09:25 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec 09 16:09:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:09:25
Dec 09 16:09:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:09:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:09:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.log', 'backups', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', 'vms', 'cephfs.cephfs.data']
Dec 09 16:09:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:09:25 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:26 compute-0 ceph-mon[75222]: 9.0 scrub starts
Dec 09 16:09:26 compute-0 ceph-mon[75222]: 9.0 scrub ok
Dec 09 16:09:26 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 09 16:09:26 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:09:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:09:26 compute-0 sshd-session[113217]: Invalid user dspace from 146.190.31.45 port 56224
Dec 09 16:09:26 compute-0 sudo[113100]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:26 compute-0 sshd-session[113217]: Connection closed by invalid user dspace 146.190.31.45 port 56224 [preauth]
Dec 09 16:09:27 compute-0 ceph-mon[75222]: pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:27 compute-0 ceph-mon[75222]: 9.2 scrub starts
Dec 09 16:09:27 compute-0 ceph-mon[75222]: 9.2 scrub ok
Dec 09 16:09:27 compute-0 sudo[113406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idmtuwkxevlikpqzwcndodfexvdxnnml ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765296567.437462-198-89031662052421/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765296567.437462-198-89031662052421/args'
Dec 09 16:09:27 compute-0 sudo[113406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:27 compute-0 sudo[113406]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:27 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 09 16:09:27 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 09 16:09:27 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:28 compute-0 sudo[113573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hborzjftnuldssnpuxjyqscayhkedhmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296568.161397-209-149970423611578/AnsiballZ_dnf.py'
Dec 09 16:09:28 compute-0 sudo[113573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:28 compute-0 python3.9[113575]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:09:28 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 09 16:09:28 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 09 16:09:29 compute-0 ceph-mon[75222]: 10.9 scrub starts
Dec 09 16:09:29 compute-0 ceph-mon[75222]: 10.9 scrub ok
Dec 09 16:09:29 compute-0 ceph-mon[75222]: pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:09:29 compute-0 sudo[113573]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:29 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:30 compute-0 ceph-mon[75222]: 6.8 scrub starts
Dec 09 16:09:30 compute-0 ceph-mon[75222]: 6.8 scrub ok
Dec 09 16:09:30 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 09 16:09:30 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 09 16:09:30 compute-0 sudo[113726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdsmxdvgiovudibqvdzlrxslppoqvktr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296570.2306597-222-225166896737485/AnsiballZ_package_facts.py'
Dec 09 16:09:30 compute-0 sudo[113726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:30 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 09 16:09:30 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 09 16:09:31 compute-0 ceph-mon[75222]: pgmap v303: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:09:31.101374) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296571101569, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7234, "num_deletes": 251, "total_data_size": 9603219, "memory_usage": 9780656, "flush_reason": "Manual Compaction"}
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296571154397, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7629222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7377, "table_properties": {"data_size": 7602112, "index_size": 17843, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 75967, "raw_average_key_size": 23, "raw_value_size": 7538896, "raw_average_value_size": 2307, "num_data_blocks": 783, "num_entries": 3267, "num_filter_entries": 3267, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296184, "oldest_key_time": 1765296184, "file_creation_time": 1765296571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 53055 microseconds, and 18827 cpu microseconds.
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:09:31.154456) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7629222 bytes OK
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:09:31.154475) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:09:31.155921) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:09:31.155934) EVENT_LOG_v1 {"time_micros": 1765296571155930, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:09:31.155963) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9571806, prev total WAL file size 9571806, number of live WAL files 2.
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:09:31.157670) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7450KB) 13(58KB) 8(1944B)]
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296571157836, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7691124, "oldest_snapshot_seqno": -1}
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3093 keys, 7643964 bytes, temperature: kUnknown
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296571205034, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7643964, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7617291, "index_size": 17859, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 74407, "raw_average_key_size": 24, "raw_value_size": 7555416, "raw_average_value_size": 2442, "num_data_blocks": 785, "num_entries": 3093, "num_filter_entries": 3093, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765296571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:09:31.205251) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7643964 bytes
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:09:31.206587) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.7 rd, 161.7 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.3, 0.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3382, records dropped: 289 output_compression: NoCompression
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:09:31.206608) EVENT_LOG_v1 {"time_micros": 1765296571206597, "job": 4, "event": "compaction_finished", "compaction_time_micros": 47262, "compaction_time_cpu_micros": 19800, "output_level": 6, "num_output_files": 1, "total_output_size": 7643964, "num_input_records": 3382, "num_output_records": 3093, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296571208105, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296571208162, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296571208201, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 09 16:09:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:09:31.157515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:09:31 compute-0 python3.9[113728]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 09 16:09:31 compute-0 sudo[113726]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:31 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:32 compute-0 ceph-mon[75222]: 6.f scrub starts
Dec 09 16:09:32 compute-0 ceph-mon[75222]: 6.f scrub ok
Dec 09 16:09:32 compute-0 ceph-mon[75222]: 10.15 scrub starts
Dec 09 16:09:32 compute-0 ceph-mon[75222]: 10.15 scrub ok
Dec 09 16:09:32 compute-0 sudo[113879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxyiakbkhphszklvvlskzoilrziuxtne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296571.8821197-232-48017789724100/AnsiballZ_stat.py'
Dec 09 16:09:32 compute-0 sudo[113879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:32 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 09 16:09:32 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 09 16:09:32 compute-0 python3.9[113881]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:09:32 compute-0 sudo[113879]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:32 compute-0 sudo[113957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmcjdazfuttgyofhrpgeerqbhnmdodrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296571.8821197-232-48017789724100/AnsiballZ_file.py'
Dec 09 16:09:32 compute-0 sudo[113957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:32 compute-0 python3.9[113959]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:09:32 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec 09 16:09:32 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec 09 16:09:32 compute-0 sudo[113957]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:33 compute-0 ceph-mon[75222]: pgmap v304: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:33 compute-0 ceph-mon[75222]: 9.a scrub starts
Dec 09 16:09:33 compute-0 ceph-mon[75222]: 9.a scrub ok
Dec 09 16:09:33 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 09 16:09:33 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 09 16:09:33 compute-0 sudo[114109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptjnalhnxawcbwwgawovuuwsxsgqmjay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296573.1420631-244-221655614458086/AnsiballZ_stat.py'
Dec 09 16:09:33 compute-0 sudo[114109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:33 compute-0 python3.9[114111]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:09:33 compute-0 sudo[114109]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:33 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec 09 16:09:33 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec 09 16:09:33 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:33 compute-0 sudo[114187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdssbajglceabuvqpgsstcemseyblcbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296573.1420631-244-221655614458086/AnsiballZ_file.py'
Dec 09 16:09:34 compute-0 sudo[114187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:34 compute-0 ceph-mon[75222]: 8.6 scrub starts
Dec 09 16:09:34 compute-0 ceph-mon[75222]: 8.6 scrub ok
Dec 09 16:09:34 compute-0 ceph-mon[75222]: 9.4 scrub starts
Dec 09 16:09:34 compute-0 ceph-mon[75222]: 9.4 scrub ok
Dec 09 16:09:34 compute-0 ceph-mon[75222]: 9.8 scrub starts
Dec 09 16:09:34 compute-0 ceph-mon[75222]: 9.8 scrub ok
Dec 09 16:09:34 compute-0 python3.9[114189]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:09:34 compute-0 sudo[114187]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:09:35 compute-0 ceph-mon[75222]: pgmap v305: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:35 compute-0 sudo[114339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yttsuvgojgodfnxlpnvploitorcvnxfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296574.7416978-262-82941303741061/AnsiballZ_lineinfile.py'
Dec 09 16:09:35 compute-0 sudo[114339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:35 compute-0 python3.9[114341]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:09:35 compute-0 sudo[114339]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:35 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:36 compute-0 sudo[114491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krmowvnyuvlejhpoekdhfmkhzmhhcjak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296575.9975421-277-126405984661470/AnsiballZ_setup.py'
Dec 09 16:09:36 compute-0 sudo[114491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:09:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:09:36 compute-0 python3.9[114493]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:09:36 compute-0 sudo[114491]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:37 compute-0 ceph-mon[75222]: pgmap v306: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:37 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 09 16:09:37 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 09 16:09:37 compute-0 sudo[114575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apfldsaaxmckvynvinjvoijjeenkpuco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296575.9975421-277-126405984661470/AnsiballZ_systemd.py'
Dec 09 16:09:37 compute-0 sudo[114575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:37 compute-0 sudo[114578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:09:37 compute-0 sudo[114578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:09:37 compute-0 sudo[114578]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:37 compute-0 python3.9[114577]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:09:37 compute-0 sudo[114603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:09:37 compute-0 sudo[114603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:09:37 compute-0 sudo[114575]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:37 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 09 16:09:37 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 09 16:09:37 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:38 compute-0 ceph-mon[75222]: 9.1a scrub starts
Dec 09 16:09:38 compute-0 ceph-mon[75222]: 9.1a scrub ok
Dec 09 16:09:38 compute-0 sshd-session[110062]: Connection closed by 192.168.122.30 port 50558
Dec 09 16:09:38 compute-0 sshd-session[110059]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:09:38 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Dec 09 16:09:38 compute-0 systemd[1]: session-38.scope: Consumed 24.567s CPU time.
Dec 09 16:09:38 compute-0 systemd-logind[786]: Session 38 logged out. Waiting for processes to exit.
Dec 09 16:09:38 compute-0 systemd-logind[786]: Removed session 38.
Dec 09 16:09:38 compute-0 sudo[114603]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:09:38 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:09:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:09:38 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:09:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:09:38 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:09:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:09:38 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:09:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:09:38 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:09:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:09:38 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:09:38 compute-0 sudo[114684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:09:38 compute-0 sudo[114684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:09:38 compute-0 sudo[114684]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:38 compute-0 sudo[114709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:09:38 compute-0 sudo[114709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:09:38 compute-0 podman[114746]: 2025-12-09 16:09:38.837339742 +0000 UTC m=+0.018746617 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:09:38 compute-0 podman[114746]: 2025-12-09 16:09:38.950078176 +0000 UTC m=+0.131485031 container create 2a70dfafa72e9fe4af84e113bba03508d9357cd50b01f7d14df0ea3329c8ab81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 16:09:38 compute-0 systemd[1]: Started libpod-conmon-2a70dfafa72e9fe4af84e113bba03508d9357cd50b01f7d14df0ea3329c8ab81.scope.
Dec 09 16:09:39 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:09:39 compute-0 podman[114746]: 2025-12-09 16:09:39.033256089 +0000 UTC m=+0.214662964 container init 2a70dfafa72e9fe4af84e113bba03508d9357cd50b01f7d14df0ea3329c8ab81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:09:39 compute-0 podman[114746]: 2025-12-09 16:09:39.042944601 +0000 UTC m=+0.224351486 container start 2a70dfafa72e9fe4af84e113bba03508d9357cd50b01f7d14df0ea3329c8ab81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:09:39 compute-0 podman[114746]: 2025-12-09 16:09:39.046741012 +0000 UTC m=+0.228147857 container attach 2a70dfafa72e9fe4af84e113bba03508d9357cd50b01f7d14df0ea3329c8ab81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:09:39 compute-0 bold_lamarr[114762]: 167 167
Dec 09 16:09:39 compute-0 systemd[1]: libpod-2a70dfafa72e9fe4af84e113bba03508d9357cd50b01f7d14df0ea3329c8ab81.scope: Deactivated successfully.
Dec 09 16:09:39 compute-0 podman[114746]: 2025-12-09 16:09:39.04906497 +0000 UTC m=+0.230471825 container died 2a70dfafa72e9fe4af84e113bba03508d9357cd50b01f7d14df0ea3329c8ab81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 09 16:09:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e1c324b187a9babb830c3e5eb9b73863c1d0a55ed221ee9311aa7009e377d33-merged.mount: Deactivated successfully.
Dec 09 16:09:39 compute-0 ceph-mon[75222]: 10.d scrub starts
Dec 09 16:09:39 compute-0 ceph-mon[75222]: 10.d scrub ok
Dec 09 16:09:39 compute-0 ceph-mon[75222]: pgmap v307: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:09:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:09:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:09:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:09:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:09:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:09:39 compute-0 podman[114746]: 2025-12-09 16:09:39.209114612 +0000 UTC m=+0.390521497 container remove 2a70dfafa72e9fe4af84e113bba03508d9357cd50b01f7d14df0ea3329c8ab81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:09:39 compute-0 systemd[1]: libpod-conmon-2a70dfafa72e9fe4af84e113bba03508d9357cd50b01f7d14df0ea3329c8ab81.scope: Deactivated successfully.
Dec 09 16:09:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:09:39 compute-0 podman[114787]: 2025-12-09 16:09:39.449681911 +0000 UTC m=+0.076738077 container create d0c64c07cf966a84aff99dd4145b68e4b4433f9dd5f4629565b70779af750c33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bassi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:09:39 compute-0 systemd[1]: Started libpod-conmon-d0c64c07cf966a84aff99dd4145b68e4b4433f9dd5f4629565b70779af750c33.scope.
Dec 09 16:09:39 compute-0 podman[114787]: 2025-12-09 16:09:39.416217906 +0000 UTC m=+0.043274082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:09:39 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a99e3fe9b7e6a0d17e36d44e828a6cadb74676406730ede10c26b5eeb210e45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a99e3fe9b7e6a0d17e36d44e828a6cadb74676406730ede10c26b5eeb210e45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a99e3fe9b7e6a0d17e36d44e828a6cadb74676406730ede10c26b5eeb210e45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a99e3fe9b7e6a0d17e36d44e828a6cadb74676406730ede10c26b5eeb210e45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a99e3fe9b7e6a0d17e36d44e828a6cadb74676406730ede10c26b5eeb210e45/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:09:39 compute-0 podman[114787]: 2025-12-09 16:09:39.570604023 +0000 UTC m=+0.197660219 container init d0c64c07cf966a84aff99dd4145b68e4b4433f9dd5f4629565b70779af750c33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:09:39 compute-0 podman[114787]: 2025-12-09 16:09:39.582644544 +0000 UTC m=+0.209700710 container start d0c64c07cf966a84aff99dd4145b68e4b4433f9dd5f4629565b70779af750c33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:09:39 compute-0 podman[114787]: 2025-12-09 16:09:39.586844156 +0000 UTC m=+0.213900332 container attach d0c64c07cf966a84aff99dd4145b68e4b4433f9dd5f4629565b70779af750c33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bassi, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:09:39 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 09 16:09:39 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 09 16:09:39 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:40 compute-0 trusting_bassi[114803]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:09:40 compute-0 trusting_bassi[114803]: --> All data devices are unavailable
Dec 09 16:09:40 compute-0 systemd[1]: libpod-d0c64c07cf966a84aff99dd4145b68e4b4433f9dd5f4629565b70779af750c33.scope: Deactivated successfully.
Dec 09 16:09:40 compute-0 podman[114787]: 2025-12-09 16:09:40.137881789 +0000 UTC m=+0.764937935 container died d0c64c07cf966a84aff99dd4145b68e4b4433f9dd5f4629565b70779af750c33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bassi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a99e3fe9b7e6a0d17e36d44e828a6cadb74676406730ede10c26b5eeb210e45-merged.mount: Deactivated successfully.
Dec 09 16:09:40 compute-0 podman[114787]: 2025-12-09 16:09:40.192877501 +0000 UTC m=+0.819933667 container remove d0c64c07cf966a84aff99dd4145b68e4b4433f9dd5f4629565b70779af750c33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bassi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:09:40 compute-0 systemd[1]: libpod-conmon-d0c64c07cf966a84aff99dd4145b68e4b4433f9dd5f4629565b70779af750c33.scope: Deactivated successfully.
Dec 09 16:09:40 compute-0 ceph-mon[75222]: 9.e scrub starts
Dec 09 16:09:40 compute-0 ceph-mon[75222]: 9.e scrub ok
Dec 09 16:09:40 compute-0 sudo[114709]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:40 compute-0 sudo[114836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:09:40 compute-0 sudo[114836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:09:40 compute-0 sudo[114836]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:40 compute-0 sudo[114861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:09:40 compute-0 sudo[114861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:09:40 compute-0 podman[114898]: 2025-12-09 16:09:40.668495617 +0000 UTC m=+0.044267460 container create 47aee94281b3cfe1001898fe68a59ecac37316089aabac7bf8d66eb62698833b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 16:09:40 compute-0 systemd[1]: Started libpod-conmon-47aee94281b3cfe1001898fe68a59ecac37316089aabac7bf8d66eb62698833b.scope.
Dec 09 16:09:40 compute-0 podman[114898]: 2025-12-09 16:09:40.646832936 +0000 UTC m=+0.022604829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:09:40 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:09:40 compute-0 podman[114898]: 2025-12-09 16:09:40.762037562 +0000 UTC m=+0.137809415 container init 47aee94281b3cfe1001898fe68a59ecac37316089aabac7bf8d66eb62698833b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_solomon, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:09:40 compute-0 podman[114898]: 2025-12-09 16:09:40.770140508 +0000 UTC m=+0.145912341 container start 47aee94281b3cfe1001898fe68a59ecac37316089aabac7bf8d66eb62698833b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 16:09:40 compute-0 podman[114898]: 2025-12-09 16:09:40.773870577 +0000 UTC m=+0.149642440 container attach 47aee94281b3cfe1001898fe68a59ecac37316089aabac7bf8d66eb62698833b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_solomon, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:09:40 compute-0 adoring_solomon[114914]: 167 167
Dec 09 16:09:40 compute-0 systemd[1]: libpod-47aee94281b3cfe1001898fe68a59ecac37316089aabac7bf8d66eb62698833b.scope: Deactivated successfully.
Dec 09 16:09:40 compute-0 podman[114898]: 2025-12-09 16:09:40.775934567 +0000 UTC m=+0.151706430 container died 47aee94281b3cfe1001898fe68a59ecac37316089aabac7bf8d66eb62698833b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d5e2a8b1d53e756049c6f443189e7ed247a545f16a68d76fafba0be5efdc8a4-merged.mount: Deactivated successfully.
Dec 09 16:09:40 compute-0 podman[114898]: 2025-12-09 16:09:40.815212931 +0000 UTC m=+0.190984764 container remove 47aee94281b3cfe1001898fe68a59ecac37316089aabac7bf8d66eb62698833b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 09 16:09:40 compute-0 systemd[1]: libpod-conmon-47aee94281b3cfe1001898fe68a59ecac37316089aabac7bf8d66eb62698833b.scope: Deactivated successfully.
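
The block above is one complete podman lifecycle for a short-lived cephadm probe container (create, init, start, attach, died, remove, all within roughly 150 ms); its only output was "167 167", which matches the ceph user's uid and gid inside the image. A sketch of the same lookup, hedged because the real probe runs inside the container rather than on the host:

    import grp
    import pwd

    # In the quay.io/ceph/ceph image the 'ceph' user and group are both 167;
    # on a host without that user this simply falls through to the message.
    try:
        print(pwd.getpwnam("ceph").pw_uid, grp.getgrnam("ceph").gr_gid)
    except KeyError:
        print("no local 'ceph' user; the container reported 167 167")
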
Dec 09 16:09:40 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 09 16:09:40 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 09 16:09:40 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec 09 16:09:40 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 8.f scrub ok
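
The scrub messages name placement groups as <pool-id>.<pg-number in hex>, so "9.17" is PG 0x17 of pool 9 and "8.f" is PG 0xf of pool 8. A one-liner to split such an id (illustrative only):

    # Ceph PG ids are "<pool>.<hex pg>": "9.17" -> pool 9, PG 23.
    pool, pg = "9.17".split(".")
    print(int(pool), int(pg, 16))
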
Dec 09 16:09:41 compute-0 podman[114938]: 2025-12-09 16:09:41.000385766 +0000 UTC m=+0.048127313 container create c4ae17c1007e9e2e8dd7d2a72da86c2403f4c88a2b1edf92296749a590db037a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:09:41 compute-0 systemd[1]: Started libpod-conmon-c4ae17c1007e9e2e8dd7d2a72da86c2403f4c88a2b1edf92296749a590db037a.scope.
Dec 09 16:09:41 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8bd850cdee5044a2237f99124cc960a4f6ec731cab0cdfa90613c3f52bec38d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:09:41 compute-0 podman[114938]: 2025-12-09 16:09:40.975632625 +0000 UTC m=+0.023374202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8bd850cdee5044a2237f99124cc960a4f6ec731cab0cdfa90613c3f52bec38d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8bd850cdee5044a2237f99124cc960a4f6ec731cab0cdfa90613c3f52bec38d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8bd850cdee5044a2237f99124cc960a4f6ec731cab0cdfa90613c3f52bec38d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
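
The xfs messages above are informational: these filesystems were created without the bigtime feature, so their inode timestamps stop at the 32-bit time_t limit of 0x7fffffff seconds after the epoch. The limit works out as:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the classic y2038 ceiling.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
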
Dec 09 16:09:41 compute-0 podman[114938]: 2025-12-09 16:09:41.088880103 +0000 UTC m=+0.136621720 container init c4ae17c1007e9e2e8dd7d2a72da86c2403f4c88a2b1edf92296749a590db037a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 09 16:09:41 compute-0 podman[114938]: 2025-12-09 16:09:41.095892397 +0000 UTC m=+0.143633954 container start c4ae17c1007e9e2e8dd7d2a72da86c2403f4c88a2b1edf92296749a590db037a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 09 16:09:41 compute-0 podman[114938]: 2025-12-09 16:09:41.099290236 +0000 UTC m=+0.147031823 container attach c4ae17c1007e9e2e8dd7d2a72da86c2403f4c88a2b1edf92296749a590db037a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 16:09:41 compute-0 ceph-mon[75222]: pgmap v308: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:41 compute-0 ceph-mon[75222]: 9.17 scrub starts
Dec 09 16:09:41 compute-0 ceph-mon[75222]: 9.17 scrub ok
Dec 09 16:09:41 compute-0 hungry_bouman[114955]: {
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:     "0": [
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:         {
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "devices": [
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "/dev/loop3"
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             ],
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_name": "ceph_lv0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_size": "21470642176",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "name": "ceph_lv0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "tags": {
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.cluster_name": "ceph",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.crush_device_class": "",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.encrypted": "0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.objectstore": "bluestore",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.osd_id": "0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.type": "block",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.vdo": "0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.with_tpm": "0"
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             },
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "type": "block",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "vg_name": "ceph_vg0"
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:         }
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:     ],
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:     "1": [
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:         {
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "devices": [
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "/dev/loop4"
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             ],
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_name": "ceph_lv1",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_size": "21470642176",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "name": "ceph_lv1",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "tags": {
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.cluster_name": "ceph",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.crush_device_class": "",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.encrypted": "0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.objectstore": "bluestore",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.osd_id": "1",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.type": "block",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.vdo": "0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.with_tpm": "0"
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             },
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "type": "block",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "vg_name": "ceph_vg1"
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:         }
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:     ],
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:     "2": [
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:         {
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "devices": [
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "/dev/loop5"
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             ],
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_name": "ceph_lv2",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_size": "21470642176",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "name": "ceph_lv2",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "tags": {
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.cluster_name": "ceph",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.crush_device_class": "",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.encrypted": "0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.objectstore": "bluestore",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.osd_id": "2",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.type": "block",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.vdo": "0",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:                 "ceph.with_tpm": "0"
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             },
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "type": "block",
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:             "vg_name": "ceph_vg2"
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:         }
Dec 09 16:09:41 compute-0 hungry_bouman[114955]:     ]
Dec 09 16:09:41 compute-0 hungry_bouman[114955]: }
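
The JSON block above is the ceph-volume lvm list report: top-level keys are OSD ids, each holding the logical volume that backs the OSD plus its LVM tags (cluster fsid, OSD fsid, objectstore, encryption flags). Each lv_size of 21470642176 bytes is about 20 GiB, and the three LVs together match the 60 GiB total that the pgmap lines report. A minimal sketch that summarizes such a report, assuming it was captured to a file (the filename is hypothetical):

    import json

    with open("lvm_list.json") as f:  # hypothetical capture of the output above
        report = json.load(f)

    # Top-level keys are OSD ids; each value is a list of backing LVs.
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid')} "
                  f"objectstore={tags.get('ceph.objectstore')}")
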
Dec 09 16:09:41 compute-0 systemd[1]: libpod-c4ae17c1007e9e2e8dd7d2a72da86c2403f4c88a2b1edf92296749a590db037a.scope: Deactivated successfully.
Dec 09 16:09:41 compute-0 podman[114938]: 2025-12-09 16:09:41.46429581 +0000 UTC m=+0.512037807 container died c4ae17c1007e9e2e8dd7d2a72da86c2403f4c88a2b1edf92296749a590db037a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:09:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8bd850cdee5044a2237f99124cc960a4f6ec731cab0cdfa90613c3f52bec38d-merged.mount: Deactivated successfully.
Dec 09 16:09:41 compute-0 podman[114938]: 2025-12-09 16:09:41.513221365 +0000 UTC m=+0.560962952 container remove c4ae17c1007e9e2e8dd7d2a72da86c2403f4c88a2b1edf92296749a590db037a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 16:09:41 compute-0 systemd[1]: libpod-conmon-c4ae17c1007e9e2e8dd7d2a72da86c2403f4c88a2b1edf92296749a590db037a.scope: Deactivated successfully.
Dec 09 16:09:41 compute-0 sudo[114861]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:41 compute-0 sudo[114977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:09:41 compute-0 sudo[114977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:09:41 compute-0 sudo[114977]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:41 compute-0 sudo[115002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:09:41 compute-0 sudo[115002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:09:41 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 09 16:09:41 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 09 16:09:41 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:42 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 09 16:09:42 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 09 16:09:42 compute-0 podman[115039]: 2025-12-09 16:09:42.044467331 +0000 UTC m=+0.036399871 container create a7c42516deb577bfed539a6ce79fbd4129d3d47c2424bc0ed95e537a945b2de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:09:42 compute-0 systemd[1]: Started libpod-conmon-a7c42516deb577bfed539a6ce79fbd4129d3d47c2424bc0ed95e537a945b2de0.scope.
Dec 09 16:09:42 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:09:42 compute-0 podman[115039]: 2025-12-09 16:09:42.119346083 +0000 UTC m=+0.111278733 container init a7c42516deb577bfed539a6ce79fbd4129d3d47c2424bc0ed95e537a945b2de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:09:42 compute-0 podman[115039]: 2025-12-09 16:09:42.026813587 +0000 UTC m=+0.018746157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:09:42 compute-0 podman[115039]: 2025-12-09 16:09:42.126646205 +0000 UTC m=+0.118578785 container start a7c42516deb577bfed539a6ce79fbd4129d3d47c2424bc0ed95e537a945b2de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatterjee, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 09 16:09:42 compute-0 podman[115039]: 2025-12-09 16:09:42.130853258 +0000 UTC m=+0.122785798 container attach a7c42516deb577bfed539a6ce79fbd4129d3d47c2424bc0ed95e537a945b2de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatterjee, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:09:42 compute-0 infallible_chatterjee[115056]: 167 167
Dec 09 16:09:42 compute-0 systemd[1]: libpod-a7c42516deb577bfed539a6ce79fbd4129d3d47c2424bc0ed95e537a945b2de0.scope: Deactivated successfully.
Dec 09 16:09:42 compute-0 podman[115039]: 2025-12-09 16:09:42.135699459 +0000 UTC m=+0.127632039 container died a7c42516deb577bfed539a6ce79fbd4129d3d47c2424bc0ed95e537a945b2de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:09:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c23d944d94e7a53423194ff8421a2bb3c6aa725c6119ee2b9fabbaf00fd22cb0-merged.mount: Deactivated successfully.
Dec 09 16:09:42 compute-0 podman[115039]: 2025-12-09 16:09:42.181149453 +0000 UTC m=+0.173081993 container remove a7c42516deb577bfed539a6ce79fbd4129d3d47c2424bc0ed95e537a945b2de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 09 16:09:42 compute-0 systemd[1]: libpod-conmon-a7c42516deb577bfed539a6ce79fbd4129d3d47c2424bc0ed95e537a945b2de0.scope: Deactivated successfully.
Dec 09 16:09:42 compute-0 ceph-mon[75222]: 8.f scrub starts
Dec 09 16:09:42 compute-0 ceph-mon[75222]: 8.f scrub ok
Dec 09 16:09:42 compute-0 ceph-mon[75222]: 9.f scrub starts
Dec 09 16:09:42 compute-0 ceph-mon[75222]: 9.f scrub ok
Dec 09 16:09:42 compute-0 podman[115079]: 2025-12-09 16:09:42.366437481 +0000 UTC m=+0.060005149 container create 273969e14acd0ce906b2b15969b3d723a7828d32309bd5d6026450558b435958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 09 16:09:42 compute-0 systemd[1]: Started libpod-conmon-273969e14acd0ce906b2b15969b3d723a7828d32309bd5d6026450558b435958.scope.
Dec 09 16:09:42 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2943e1ba5d280ed884e1b9e7812fe21ace81f826ea26426e5f5381f8910d42fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:09:42 compute-0 podman[115079]: 2025-12-09 16:09:42.349186949 +0000 UTC m=+0.042754637 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2943e1ba5d280ed884e1b9e7812fe21ace81f826ea26426e5f5381f8910d42fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2943e1ba5d280ed884e1b9e7812fe21ace81f826ea26426e5f5381f8910d42fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2943e1ba5d280ed884e1b9e7812fe21ace81f826ea26426e5f5381f8910d42fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:09:42 compute-0 podman[115079]: 2025-12-09 16:09:42.453438836 +0000 UTC m=+0.147006534 container init 273969e14acd0ce906b2b15969b3d723a7828d32309bd5d6026450558b435958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 16:09:42 compute-0 podman[115079]: 2025-12-09 16:09:42.461043097 +0000 UTC m=+0.154610775 container start 273969e14acd0ce906b2b15969b3d723a7828d32309bd5d6026450558b435958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:09:42 compute-0 podman[115079]: 2025-12-09 16:09:42.465303601 +0000 UTC m=+0.158871319 container attach 273969e14acd0ce906b2b15969b3d723a7828d32309bd5d6026450558b435958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:09:42 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 09 16:09:42 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 09 16:09:43 compute-0 lvm[115172]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:09:43 compute-0 lvm[115174]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:09:43 compute-0 lvm[115174]: VG ceph_vg1 finished
Dec 09 16:09:43 compute-0 lvm[115172]: VG ceph_vg0 finished
Dec 09 16:09:43 compute-0 ceph-mon[75222]: pgmap v309: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:43 compute-0 ceph-mon[75222]: 10.e scrub starts
Dec 09 16:09:43 compute-0 ceph-mon[75222]: 10.e scrub ok
Dec 09 16:09:43 compute-0 lvm[115176]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:09:43 compute-0 lvm[115176]: VG ceph_vg2 finished
Dec 09 16:09:43 compute-0 hardcore_euclid[115095]: {}
Dec 09 16:09:43 compute-0 systemd[1]: libpod-273969e14acd0ce906b2b15969b3d723a7828d32309bd5d6026450558b435958.scope: Deactivated successfully.
Dec 09 16:09:43 compute-0 systemd[1]: libpod-273969e14acd0ce906b2b15969b3d723a7828d32309bd5d6026450558b435958.scope: Consumed 1.445s CPU time.
Dec 09 16:09:43 compute-0 podman[115079]: 2025-12-09 16:09:43.369342802 +0000 UTC m=+1.062910540 container died 273969e14acd0ce906b2b15969b3d723a7828d32309bd5d6026450558b435958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2943e1ba5d280ed884e1b9e7812fe21ace81f826ea26426e5f5381f8910d42fb-merged.mount: Deactivated successfully.
Dec 09 16:09:43 compute-0 podman[115079]: 2025-12-09 16:09:43.421322222 +0000 UTC m=+1.114889890 container remove 273969e14acd0ce906b2b15969b3d723a7828d32309bd5d6026450558b435958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euclid, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:09:43 compute-0 systemd[1]: libpod-conmon-273969e14acd0ce906b2b15969b3d723a7828d32309bd5d6026450558b435958.scope: Deactivated successfully.
Dec 09 16:09:43 compute-0 sudo[115002]: pam_unix(sudo:session): session closed for user root
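
After the LVM inventory, cephadm runs ceph-volume raw list the same way; that container printed an empty object ({}), which is the expected result when every OSD on the host is LVM-backed rather than sitting on a raw device. A trivial way to combine the two reports, with both filenames hypothetical:

    import json

    lvm = json.load(open("lvm_list.json"))  # keyed by OSD id, as shown above
    raw = json.load(open("raw_list.json"))  # '{}' in this run

    print(f"lvm-backed OSDs: {sorted(lvm, key=int)}; raw-device OSDs: {len(raw)}")
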
Dec 09 16:09:43 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:09:43 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:09:43 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:09:43 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:09:43 compute-0 sudo[115192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:09:43 compute-0 sudo[115192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:09:43 compute-0 sudo[115192]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:43 compute-0 sshd-session[115217]: Accepted publickey for zuul from 192.168.122.30 port 34328 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:09:43 compute-0 systemd-logind[786]: New session 39 of user zuul.
Dec 09 16:09:43 compute-0 systemd[1]: Started Session 39 of User zuul.
Dec 09 16:09:43 compute-0 sshd-session[115217]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:09:43 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 09 16:09:43 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 09 16:09:43 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:44 compute-0 ceph-mon[75222]: 6.a scrub starts
Dec 09 16:09:44 compute-0 ceph-mon[75222]: 6.a scrub ok
Dec 09 16:09:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:09:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:09:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:09:44 compute-0 sudo[115370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiolbdcqtcwtnidwmyjefssxvixfpuyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296584.0287874-22-210463799494847/AnsiballZ_file.py'
Dec 09 16:09:44 compute-0 sudo[115370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:44 compute-0 python3.9[115372]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:09:44 compute-0 sudo[115370]: pam_unix(sudo:session): session closed for user root
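
The AnsiballZ lines record each Ansible module call with its resolved arguments; the one above creates /var/lib/edpm-config/firewall as root:root with mode 0750. That amounts to roughly the following, though the module also handles idempotence, check mode, and SELinux contexts (sketch only; values taken from the log line):

    import os
    import shutil

    path = "/var/lib/edpm-config/firewall"
    os.makedirs(path, exist_ok=True)               # state=directory
    os.chmod(path, 0o750)                          # mode=0750
    shutil.chown(path, user="root", group="root")  # owner=root, group=root
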
Dec 09 16:09:44 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 09 16:09:44 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 09 16:09:45 compute-0 ceph-mon[75222]: 6.5 scrub starts
Dec 09 16:09:45 compute-0 ceph-mon[75222]: 6.5 scrub ok
Dec 09 16:09:45 compute-0 ceph-mon[75222]: pgmap v310: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:45 compute-0 sudo[115523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfppfazxpawpfuggxmjwykmxwbropbuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296584.9846022-34-82711519818043/AnsiballZ_stat.py'
Dec 09 16:09:45 compute-0 sudo[115523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:45 compute-0 python3.9[115525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:09:45 compute-0 sudo[115523]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:45 compute-0 sudo[115601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmwqmgpwjdujfvobggqepoxnhvlkicfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296584.9846022-34-82711519818043/AnsiballZ_file.py'
Dec 09 16:09:45 compute-0 sudo[115601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:45 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 09 16:09:45 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 09 16:09:45 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:46 compute-0 python3.9[115603]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:09:46 compute-0 sudo[115601]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:46 compute-0 ceph-mon[75222]: 6.9 scrub starts
Dec 09 16:09:46 compute-0 ceph-mon[75222]: 6.9 scrub ok
Dec 09 16:09:46 compute-0 sshd-session[115220]: Connection closed by 192.168.122.30 port 34328
Dec 09 16:09:46 compute-0 sshd-session[115217]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:09:46 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Dec 09 16:09:46 compute-0 systemd[1]: session-39.scope: Consumed 1.657s CPU time.
Dec 09 16:09:46 compute-0 systemd-logind[786]: Session 39 logged out. Waiting for processes to exit.
Dec 09 16:09:46 compute-0 systemd-logind[786]: Removed session 39.
Dec 09 16:09:46 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 09 16:09:46 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 09 16:09:47 compute-0 ceph-mon[75222]: 6.7 scrub starts
Dec 09 16:09:47 compute-0 ceph-mon[75222]: 6.7 scrub ok
Dec 09 16:09:47 compute-0 ceph-mon[75222]: pgmap v311: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:47 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 09 16:09:47 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 09 16:09:47 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:48 compute-0 ceph-mon[75222]: 6.3 scrub starts
Dec 09 16:09:48 compute-0 ceph-mon[75222]: 6.3 scrub ok
Dec 09 16:09:48 compute-0 ceph-mon[75222]: 9.c scrub starts
Dec 09 16:09:48 compute-0 ceph-mon[75222]: 9.c scrub ok
Dec 09 16:09:48 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec 09 16:09:48 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec 09 16:09:49 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 09 16:09:49 compute-0 ceph-osd[87055]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 09 16:09:49 compute-0 ceph-mon[75222]: pgmap v312: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:49 compute-0 ceph-mon[75222]: 9.7 scrub starts
Dec 09 16:09:49 compute-0 ceph-mon[75222]: 9.7 scrub ok
Dec 09 16:09:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:09:49 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 09 16:09:49 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 09 16:09:49 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:50 compute-0 ceph-mon[75222]: 9.1f scrub starts
Dec 09 16:09:50 compute-0 ceph-mon[75222]: 9.1f scrub ok
Dec 09 16:09:50 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 09 16:09:50 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 09 16:09:50 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 09 16:09:50 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 09 16:09:51 compute-0 ceph-mon[75222]: 6.0 scrub starts
Dec 09 16:09:51 compute-0 ceph-mon[75222]: 6.0 scrub ok
Dec 09 16:09:51 compute-0 ceph-mon[75222]: pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:51 compute-0 ceph-mon[75222]: 9.6 scrub starts
Dec 09 16:09:51 compute-0 ceph-mon[75222]: 9.6 scrub ok
Dec 09 16:09:51 compute-0 sshd-session[115628]: Accepted publickey for zuul from 192.168.122.30 port 56538 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:09:51 compute-0 systemd-logind[786]: New session 40 of user zuul.
Dec 09 16:09:51 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec 09 16:09:51 compute-0 systemd[1]: Started Session 40 of User zuul.
Dec 09 16:09:51 compute-0 sshd-session[115628]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:09:51 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec 09 16:09:51 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:52 compute-0 ceph-mon[75222]: 9.11 scrub starts
Dec 09 16:09:52 compute-0 ceph-mon[75222]: 9.11 scrub ok
Dec 09 16:09:52 compute-0 python3.9[115781]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:09:53 compute-0 ceph-mon[75222]: 9.5 scrub starts
Dec 09 16:09:53 compute-0 ceph-mon[75222]: 9.5 scrub ok
Dec 09 16:09:53 compute-0 ceph-mon[75222]: pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:53 compute-0 sudo[115935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkqkcwwpavktvhknxtljtpgtbxyooofu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296593.2415102-33-169060077246038/AnsiballZ_file.py'
Dec 09 16:09:53 compute-0 sudo[115935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:53 compute-0 python3.9[115937]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:09:53 compute-0 sudo[115935]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:53 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:09:54 compute-0 sudo[116110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cskkpufnguzfpmdyndezhkqepvkcqvcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296594.1020596-41-26973278796256/AnsiballZ_stat.py'
Dec 09 16:09:54 compute-0 sudo[116110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:54 compute-0 python3.9[116112]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:09:54 compute-0 sudo[116110]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:55 compute-0 sudo[116188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihvcwtsznwshnvrngidvejxfwuhvxssc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296594.1020596-41-26973278796256/AnsiballZ_file.py'
Dec 09 16:09:55 compute-0 sudo[116188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:55 compute-0 python3.9[116190]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.gy6aka3j recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:09:55 compute-0 sudo[116188]: pam_unix(sudo:session): session closed for user root
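
The tasks above stage a container registry credentials file at /root/.config/containers/auth.json (a stat, then a file task to fix ownership and mode 0660). Such files follow the containers-auth.json layout, {"auths": {<registry>: {"auth": base64("user:pass")}}}; a sketch that lists the registries without printing secrets, assuming that layout (the file's actual contents are not in the log):

    import base64
    import json

    with open("/root/.config/containers/auth.json") as f:  # path from the log
        auths = json.load(f).get("auths", {})

    for registry, entry in auths.items():
        user = base64.b64decode(entry["auth"]).decode().split(":", 1)[0]
        print(f"{registry}: user={user}")
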
Dec 09 16:09:55 compute-0 ceph-mon[75222]: pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:55 compute-0 sudo[116340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssmkcxmwtuhtblprkcvzvqtibjyvssdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296595.6745577-61-160993797739859/AnsiballZ_stat.py'
Dec 09 16:09:55 compute-0 sudo[116340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:55 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:56 compute-0 python3.9[116342]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:09:56 compute-0 sudo[116340]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:56 compute-0 sudo[116418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awgwrkhacetsvahhwxznelfsugcffnua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296595.6745577-61-160993797739859/AnsiballZ_file.py'
Dec 09 16:09:56 compute-0 sudo[116418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:09:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:09:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:09:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:09:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:09:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:09:56 compute-0 python3.9[116420]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.nsrc3eez recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:09:56 compute-0 sudo[116418]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:57 compute-0 sudo[116570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pybvazizypdorormwyoujmpkfplkgmwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296596.783404-74-133633439414529/AnsiballZ_file.py'
Dec 09 16:09:57 compute-0 sudo[116570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:57 compute-0 python3.9[116572]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:09:57 compute-0 sudo[116570]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:57 compute-0 ceph-mon[75222]: pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:57 compute-0 rsyslogd[1004]: imjournal from <np0005552052:sudo>: begin to drop messages due to rate-limiting
Dec 09 16:09:57 compute-0 sudo[116722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogidrdrjpsshhabzoxpgamuoxdhnbqnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296597.4299564-82-278610343015253/AnsiballZ_stat.py'
Dec 09 16:09:57 compute-0 sudo[116722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:57 compute-0 python3.9[116724]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:09:57 compute-0 sudo[116722]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:57 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 09 16:09:57 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 09 16:09:57 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:57 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.b scrub starts
Dec 09 16:09:58 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.b scrub ok
Dec 09 16:09:58 compute-0 sudo[116800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyvmcmuallplbywaeycnsflkkamprqnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296597.4299564-82-278610343015253/AnsiballZ_file.py'
Dec 09 16:09:58 compute-0 sudo[116800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:58 compute-0 ceph-mon[75222]: 9.19 scrub starts
Dec 09 16:09:58 compute-0 ceph-mon[75222]: 9.19 scrub ok
Dec 09 16:09:58 compute-0 python3.9[116802]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:09:58 compute-0 sudo[116800]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:58 compute-0 sudo[116952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnqmxmxdtejrxnkayjprumidsvcqelfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296598.5028558-82-80542911957599/AnsiballZ_stat.py'
Dec 09 16:09:58 compute-0 sudo[116952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:58 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec 09 16:09:58 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec 09 16:09:58 compute-0 python3.9[116954]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:09:58 compute-0 sudo[116952]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:59 compute-0 sudo[117030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixreduzzwixmatrgdcypcqfycqnrgleu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296598.5028558-82-80542911957599/AnsiballZ_file.py'
Dec 09 16:09:59 compute-0 sudo[117030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:09:59 compute-0 ceph-mon[75222]: pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:09:59 compute-0 ceph-mon[75222]: 9.b scrub starts
Dec 09 16:09:59 compute-0 ceph-mon[75222]: 9.b scrub ok
Dec 09 16:09:59 compute-0 ceph-mon[75222]: 9.18 scrub starts
Dec 09 16:09:59 compute-0 ceph-mon[75222]: 9.18 scrub ok
Dec 09 16:09:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:09:59 compute-0 python3.9[117032]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:09:59 compute-0 sudo[117030]: pam_unix(sudo:session): session closed for user root
Dec 09 16:09:59 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec 09 16:09:59 compute-0 ceph-osd[88099]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec 09 16:09:59 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:00 compute-0 sudo[117182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqrteevhggeylntiamzzuppicszanscx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296599.5772846-105-220510235806658/AnsiballZ_file.py'
Dec 09 16:10:00 compute-0 sudo[117182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:00 compute-0 python3.9[117184]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:00 compute-0 sudo[117182]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:00 compute-0 ceph-mon[75222]: 9.13 scrub starts
Dec 09 16:10:00 compute-0 ceph-mon[75222]: 9.13 scrub ok
Dec 09 16:10:00 compute-0 sudo[117334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtirnjmcqhpzwdqtfieifcjqdlpkzwzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296600.3841717-113-164493335469793/AnsiballZ_stat.py'
Dec 09 16:10:00 compute-0 sudo[117334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:00 compute-0 python3.9[117336]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:00 compute-0 sudo[117334]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:01 compute-0 sudo[117412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmjhczpzuqneaexthhgyydunpdybgjwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296600.3841717-113-164493335469793/AnsiballZ_file.py'
Dec 09 16:10:01 compute-0 sudo[117412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:01 compute-0 python3.9[117414]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:01 compute-0 ceph-mon[75222]: pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:01 compute-0 sudo[117412]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:01 compute-0 sudo[117564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sewlolqucvwovsrmjmphtecnxejfteur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296601.5337462-125-88556929341723/AnsiballZ_stat.py'
Dec 09 16:10:01 compute-0 sudo[117564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:01 compute-0 python3.9[117566]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:01 compute-0 sudo[117564]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:01 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:02 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 09 16:10:02 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 09 16:10:02 compute-0 sudo[117642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fctkkxfypplurofbyoeoznlqntmgjlvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296601.5337462-125-88556929341723/AnsiballZ_file.py'
Dec 09 16:10:02 compute-0 sudo[117642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:02 compute-0 python3.9[117644]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:02 compute-0 sudo[117642]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:03 compute-0 sudo[117794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oebonrlnoaitbmktyveqoefoqpsscyyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296602.6048198-137-100204264301391/AnsiballZ_systemd.py'
Dec 09 16:10:03 compute-0 sudo[117794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:03 compute-0 ceph-mon[75222]: pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:03 compute-0 ceph-mon[75222]: 9.16 scrub starts
Dec 09 16:10:03 compute-0 ceph-mon[75222]: 9.16 scrub ok
Dec 09 16:10:03 compute-0 python3.9[117796]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:10:03 compute-0 systemd[1]: Reloading.
Dec 09 16:10:03 compute-0 systemd-rc-local-generator[117819]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:10:03 compute-0 systemd-sysv-generator[117825]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:10:03 compute-0 sudo[117794]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:03 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:04 compute-0 sudo[117983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktsbefyjqxxfvrgqmtbvztzrjqepkrza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296604.037634-145-86130595977220/AnsiballZ_stat.py'
Dec 09 16:10:04 compute-0 sudo[117983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:10:04 compute-0 python3.9[117985]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:04 compute-0 sudo[117983]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:04 compute-0 sudo[118061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwfqpkmjsmpsbdutnfkpriynbqczdxfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296604.037634-145-86130595977220/AnsiballZ_file.py'
Dec 09 16:10:04 compute-0 sudo[118061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:04 compute-0 python3.9[118063]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:04 compute-0 sudo[118061]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:05 compute-0 sudo[118213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnttxjdvedwjgbbotisfzzjbpfrdqcha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296605.1099298-157-109996542745040/AnsiballZ_stat.py'
Dec 09 16:10:05 compute-0 sudo[118213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:05 compute-0 ceph-mon[75222]: pgmap v320: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:05 compute-0 python3.9[118215]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:05 compute-0 sudo[118213]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:05 compute-0 sudo[118291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcujyazcmzanrwtkuumvuldqvlbnfdwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296605.1099298-157-109996542745040/AnsiballZ_file.py'
Dec 09 16:10:05 compute-0 sudo[118291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:05 compute-0 python3.9[118293]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:05 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:05 compute-0 sudo[118291]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:06 compute-0 sudo[118443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voiscpeyuwutfrncvvjqeyhfrigksyei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296606.175273-169-65943114724272/AnsiballZ_systemd.py'
Dec 09 16:10:06 compute-0 sudo[118443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:06 compute-0 python3.9[118445]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:10:06 compute-0 systemd[1]: Reloading.
Dec 09 16:10:06 compute-0 systemd-rc-local-generator[118473]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:10:06 compute-0 systemd-sysv-generator[118476]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:10:07 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec 09 16:10:07 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec 09 16:10:07 compute-0 systemd[1]: Starting Create netns directory...
Dec 09 16:10:07 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 09 16:10:07 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 09 16:10:07 compute-0 systemd[1]: Finished Create netns directory.
Dec 09 16:10:07 compute-0 sudo[118443]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:07 compute-0 ceph-mon[75222]: pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:07 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:08 compute-0 python3.9[118636]: ansible-ansible.builtin.service_facts Invoked
Dec 09 16:10:08 compute-0 network[118653]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 16:10:08 compute-0 network[118654]: 'network-scripts' will be removed from distribution in near future.
Dec 09 16:10:08 compute-0 network[118655]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 16:10:08 compute-0 ceph-mon[75222]: 9.9 scrub starts
Dec 09 16:10:08 compute-0 ceph-mon[75222]: 9.9 scrub ok
Dec 09 16:10:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:10:09 compute-0 ceph-mon[75222]: pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:09 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:11 compute-0 ceph-mon[75222]: pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:11 compute-0 sudo[118917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qezctmknqpdfdoxrkizyogyrmcgojbhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296611.4395113-195-92493945703162/AnsiballZ_stat.py'
Dec 09 16:10:11 compute-0 sudo[118917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:11 compute-0 python3.9[118919]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:11 compute-0 sshd-session[118854]: Invalid user dspace from 146.190.31.45 port 40450
Dec 09 16:10:11 compute-0 sudo[118917]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:11 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:12 compute-0 sshd-session[118854]: Connection closed by invalid user dspace 146.190.31.45 port 40450 [preauth]
Dec 09 16:10:12 compute-0 sudo[118995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbdxgfyihwcmdeooxziwwivwnxlxouqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296611.4395113-195-92493945703162/AnsiballZ_file.py'
Dec 09 16:10:12 compute-0 sudo[118995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:12 compute-0 python3.9[118997]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:12 compute-0 sudo[118995]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:12 compute-0 sudo[119147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbdslulhxugaszygwdjgbhgxsmypwkhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296612.6255703-208-63277668203634/AnsiballZ_file.py'
Dec 09 16:10:12 compute-0 sudo[119147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:13 compute-0 python3.9[119149]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:13 compute-0 sudo[119147]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:13 compute-0 ceph-mon[75222]: pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:13 compute-0 sudo[119299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tylbnmplpbtyakjzfyepdhznizvgogbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296613.2355192-216-102700377283822/AnsiballZ_stat.py'
Dec 09 16:10:13 compute-0 sudo[119299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:13 compute-0 python3.9[119301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:13 compute-0 sudo[119299]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:13 compute-0 sudo[119377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zydfqskittavaltljamblprvlyzgnlaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296613.2355192-216-102700377283822/AnsiballZ_file.py'
Dec 09 16:10:13 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec 09 16:10:13 compute-0 sudo[119377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:13 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec 09 16:10:13 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:14 compute-0 python3.9[119379]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:14 compute-0 sudo[119377]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:10:14 compute-0 ceph-mon[75222]: 9.d scrub starts
Dec 09 16:10:14 compute-0 ceph-mon[75222]: 9.d scrub ok
Dec 09 16:10:14 compute-0 sudo[119529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvdvtphgtosbnfaphbumoqggbzptdviz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296614.3997042-231-155693429162521/AnsiballZ_timezone.py'
Dec 09 16:10:14 compute-0 sudo[119529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:14 compute-0 python3.9[119531]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 09 16:10:15 compute-0 systemd[1]: Starting Time & Date Service...
Dec 09 16:10:15 compute-0 systemd[1]: Started Time & Date Service.
Dec 09 16:10:15 compute-0 sudo[119529]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:15 compute-0 sudo[119685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sstnzrhbheddhsdixfstqhwnvhpaqcrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296615.3899543-240-158663613455546/AnsiballZ_file.py'
Dec 09 16:10:15 compute-0 sudo[119685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:15 compute-0 python3.9[119687]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:15 compute-0 sudo[119685]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:15 compute-0 ceph-mon[75222]: pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:16 compute-0 sudo[119837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amfhmxtafqrxrmpnbanvxqvplbbnankp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296616.1140614-248-46177238768373/AnsiballZ_stat.py'
Dec 09 16:10:16 compute-0 sudo[119837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:16 compute-0 python3.9[119839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:16 compute-0 sudo[119837]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:16 compute-0 sudo[119915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nekhtfbtbwzqalkcphixwxkxzawvtbph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296616.1140614-248-46177238768373/AnsiballZ_file.py'
Dec 09 16:10:16 compute-0 sudo[119915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:16 compute-0 ceph-mon[75222]: pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:17 compute-0 python3.9[119917]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:17 compute-0 sudo[119915]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:17 compute-0 sudo[120067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzuwbihupejxcpoizaxyhwuyebbtntvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296617.2150323-260-252889306240599/AnsiballZ_stat.py'
Dec 09 16:10:17 compute-0 sudo[120067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:17 compute-0 python3.9[120069]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:17 compute-0 sudo[120067]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:17 compute-0 sudo[120145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvlzjernoikspnwlzuoxflqccbafanwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296617.2150323-260-252889306240599/AnsiballZ_file.py'
Dec 09 16:10:17 compute-0 sudo[120145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:18 compute-0 python3.9[120147]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.i4a4rn08 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:18 compute-0 sudo[120145]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:18 compute-0 sudo[120297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlkguiqfwfkjtctqftgntqvnnzmgiitq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296618.2719095-272-103467334011416/AnsiballZ_stat.py'
Dec 09 16:10:18 compute-0 sudo[120297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:18 compute-0 python3.9[120299]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:18 compute-0 sudo[120297]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:18 compute-0 sudo[120375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-himsdbuozfbbjkiumfsnropuhjiiujcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296618.2719095-272-103467334011416/AnsiballZ_file.py'
Dec 09 16:10:18 compute-0 sudo[120375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:19 compute-0 ceph-mon[75222]: pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:19 compute-0 python3.9[120377]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:19 compute-0 sudo[120375]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:10:19 compute-0 sudo[120527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybtjzsbuctqmfjqpvhrhxmfqlhfgtjzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296619.4297495-285-89779666188765/AnsiballZ_command.py'
Dec 09 16:10:19 compute-0 sudo[120527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:20 compute-0 python3.9[120529]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:10:20 compute-0 sudo[120527]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:20 compute-0 sudo[120680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrjlpccxajkzfgsvpwoymobgcclqokhn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765296620.2265167-293-16052722757768/AnsiballZ_edpm_nftables_from_files.py'
Dec 09 16:10:20 compute-0 sudo[120680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:20 compute-0 python3[120682]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 09 16:10:20 compute-0 sudo[120680]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:20 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 09 16:10:20 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 09 16:10:21 compute-0 sudo[120832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdfxmpfylwqgtyubtrvwctdqzfhboaoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296621.0505815-301-78670933357558/AnsiballZ_stat.py'
Dec 09 16:10:21 compute-0 sudo[120832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:21 compute-0 ceph-mon[75222]: pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:21 compute-0 python3.9[120834]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:21 compute-0 sudo[120832]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:21 compute-0 sudo[120910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phrbnpvpyjqvnshcaynhlhfsatttdmia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296621.0505815-301-78670933357558/AnsiballZ_file.py'
Dec 09 16:10:21 compute-0 sudo[120910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:22 compute-0 python3.9[120912]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:22 compute-0 sudo[120910]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:22 compute-0 sudo[121062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufcficnhdyhbsprglbzmuypwincrjcfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296622.2291653-313-168219437290206/AnsiballZ_stat.py'
Dec 09 16:10:22 compute-0 sudo[121062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:22 compute-0 ceph-mon[75222]: 9.1 scrub starts
Dec 09 16:10:22 compute-0 ceph-mon[75222]: 9.1 scrub ok
Dec 09 16:10:22 compute-0 python3.9[121064]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:22 compute-0 sudo[121062]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:22 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Dec 09 16:10:22 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Dec 09 16:10:23 compute-0 sudo[121140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aadmxniamkznojdbndxermlzjyalkzth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296622.2291653-313-168219437290206/AnsiballZ_file.py'
Dec 09 16:10:23 compute-0 sudo[121140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:23 compute-0 python3.9[121142]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:23 compute-0 sudo[121140]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:23 compute-0 sudo[121292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txxkunutyrigbmbmzpekqimpusfluoqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296623.5080426-325-63992776333150/AnsiballZ_stat.py'
Dec 09 16:10:23 compute-0 sudo[121292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:23 compute-0 ceph-mon[75222]: pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:23 compute-0 ceph-mon[75222]: 9.3 scrub starts
Dec 09 16:10:23 compute-0 ceph-mon[75222]: 9.3 scrub ok
Dec 09 16:10:23 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec 09 16:10:23 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec 09 16:10:23 compute-0 python3.9[121294]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:24 compute-0 sudo[121292]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:24 compute-0 sudo[121370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhyydyedzhipcprmydxyfytyoaoxdsjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296623.5080426-325-63992776333150/AnsiballZ_file.py'
Dec 09 16:10:24 compute-0 sudo[121370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:10:24 compute-0 python3.9[121372]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:24 compute-0 sudo[121370]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:24 compute-0 ceph-mon[75222]: 9.1d scrub starts
Dec 09 16:10:24 compute-0 ceph-mon[75222]: 9.1d scrub ok
Dec 09 16:10:24 compute-0 ceph-mon[75222]: pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:24 compute-0 sudo[121522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvbmqdbsnfzmnegzjarvvksrbpclblrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296624.6935272-337-264223761417451/AnsiballZ_stat.py'
Dec 09 16:10:24 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 09 16:10:24 compute-0 sudo[121522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:25 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 09 16:10:25 compute-0 python3.9[121524]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:25 compute-0 sudo[121522]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:25 compute-0 sudo[121600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlbmqupkxomqdqgjtcpcgpymcbteontz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296624.6935272-337-264223761417451/AnsiballZ_file.py'
Dec 09 16:10:25 compute-0 sudo[121600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:25 compute-0 python3.9[121602]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:25 compute-0 sudo[121600]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:25 compute-0 ceph-mon[75222]: 9.1c scrub starts
Dec 09 16:10:25 compute-0 ceph-mon[75222]: 9.1c scrub ok
Dec 09 16:10:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:10:25
Dec 09 16:10:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:10:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:10:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['vms', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec 09 16:10:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:26 compute-0 sudo[121752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csmyfojzmgngvalxyiohhsdbzkidzeex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296625.7773147-349-181353439538123/AnsiballZ_stat.py'
Dec 09 16:10:26 compute-0 sudo[121752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:26 compute-0 python3.9[121754]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:26 compute-0 sudo[121752]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:10:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:10:26 compute-0 sudo[121830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veoqapuharwpwbzlcwnaalbzbjbareuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296625.7773147-349-181353439538123/AnsiballZ_file.py'
Dec 09 16:10:26 compute-0 sudo[121830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:26 compute-0 python3.9[121832]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:26 compute-0 ceph-mon[75222]: pgmap v331: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:26 compute-0 sudo[121830]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:27 compute-0 sudo[121982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxdmwmcslrglaxvgruuulrpzxduencsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296627.1232557-362-236958897335960/AnsiballZ_command.py'
Dec 09 16:10:27 compute-0 sudo[121982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:27 compute-0 python3.9[121984]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:10:27 compute-0 sudo[121982]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:28 compute-0 sudo[122137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnwgdcjszmufwmgtzkfsopikawxfntlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296627.8047073-370-210293636813919/AnsiballZ_blockinfile.py'
Dec 09 16:10:28 compute-0 sudo[122137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:28 compute-0 python3.9[122139]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:28 compute-0 sudo[122137]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:28 compute-0 sudo[122289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvxmebindfzoihugkekhddmjhbitpjpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296628.717852-379-32151116291121/AnsiballZ_file.py'
Dec 09 16:10:28 compute-0 sudo[122289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:29 compute-0 ceph-mon[75222]: pgmap v332: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:29 compute-0 python3.9[122291]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:29 compute-0 sudo[122289]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:10:29 compute-0 sudo[122441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pljrclytakthbkgbldauqorkuhhkrahm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296629.3589258-379-77922677965944/AnsiballZ_file.py'
Dec 09 16:10:29 compute-0 sudo[122441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:29 compute-0 python3.9[122443]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:29 compute-0 sudo[122441]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:29 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec 09 16:10:29 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec 09 16:10:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:30 compute-0 sudo[122593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zznoyummiirxxwabczjmrhugrgqdhlhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296630.0158174-394-61314767625472/AnsiballZ_mount.py'
Dec 09 16:10:30 compute-0 sudo[122593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:30 compute-0 python3.9[122595]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 09 16:10:30 compute-0 sudo[122593]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:30 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 09 16:10:30 compute-0 ceph-osd[86013]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 09 16:10:31 compute-0 ceph-mon[75222]: 9.1b scrub starts
Dec 09 16:10:31 compute-0 ceph-mon[75222]: 9.1b scrub ok
Dec 09 16:10:31 compute-0 ceph-mon[75222]: pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:31 compute-0 sudo[122745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfskazykwfpnrqxuhxbdaimjcyljpmer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296630.851101-394-223119180132185/AnsiballZ_mount.py'
Dec 09 16:10:31 compute-0 sudo[122745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:31 compute-0 python3.9[122747]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 09 16:10:31 compute-0 sudo[122745]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:31 compute-0 sshd-session[115631]: Connection closed by 192.168.122.30 port 56538
Dec 09 16:10:31 compute-0 sshd-session[115628]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:10:31 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Dec 09 16:10:31 compute-0 systemd[1]: session-40.scope: Consumed 29.606s CPU time.
Dec 09 16:10:31 compute-0 systemd-logind[786]: Session 40 logged out. Waiting for processes to exit.
Dec 09 16:10:31 compute-0 systemd-logind[786]: Removed session 40.
Dec 09 16:10:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:32 compute-0 ceph-mon[75222]: 9.1e scrub starts
Dec 09 16:10:32 compute-0 ceph-mon[75222]: 9.1e scrub ok
Dec 09 16:10:33 compute-0 ceph-mon[75222]: pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:10:35 compute-0 ceph-mon[75222]: pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:10:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
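
The pg_autoscaler run above is numerically self-consistent: each pool's "pg target" equals its capacity ratio times its bias times a PG budget of 300, which matches the default mon_target_pg_per_osd=100 across this cluster's 3 OSDs (an inference from the logged numbers, not something the log states); the target is then quantized (here to 1 and 16 respectively). A quick check against two of the logged pools:

    # Sketch: reproduce two of the logged pg targets from the logged inputs.
    python3 -c 'print(7.185749983720779e-06 * 1.0 * 300)'    # ~0.0021557, matches the '.mgr' pg target
    python3 -c 'print(2.0333656678172135e-06 * 4.0 * 300)'   # ~0.0024400, matches 'cephfs.cephfs.meta'
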
Dec 09 16:10:37 compute-0 ceph-mon[75222]: pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:37 compute-0 sshd-session[122772]: Accepted publickey for zuul from 192.168.122.30 port 45420 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:10:37 compute-0 systemd-logind[786]: New session 41 of user zuul.
Dec 09 16:10:37 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec 09 16:10:37 compute-0 sshd-session[122772]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:10:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:38 compute-0 sudo[122925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulzeuoatjgbxubnrgpxxwnrgbqkvfyti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296637.5550354-16-39330361793458/AnsiballZ_tempfile.py'
Dec 09 16:10:38 compute-0 sudo[122925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:38 compute-0 python3.9[122927]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 09 16:10:38 compute-0 sudo[122925]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:38 compute-0 sudo[123077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjfoprldkkyuujwvwfxtqrirqyybderj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296638.4105906-28-190144582304027/AnsiballZ_stat.py'
Dec 09 16:10:38 compute-0 sudo[123077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:39 compute-0 python3.9[123079]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:10:39 compute-0 sudo[123077]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:39 compute-0 ceph-mon[75222]: pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:10:39 compute-0 sudo[123231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zofmepgrzoacjvjptxdwkqlcgudiobui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296639.2382832-36-206409880086724/AnsiballZ_slurp.py'
Dec 09 16:10:39 compute-0 sudo[123231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:39 compute-0 python3.9[123233]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 09 16:10:39 compute-0 sudo[123231]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:40 compute-0 sudo[123383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yskppgtampcdvzxoinusvmnwyvsjozis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296640.0379493-44-43034724738072/AnsiballZ_stat.py'
Dec 09 16:10:40 compute-0 sudo[123383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:40 compute-0 python3.9[123385]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.weydf04d follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:10:40 compute-0 sudo[123383]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:41 compute-0 sudo[123508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjrmfueimbmnbtoooctvrziibuvpbnmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296640.0379493-44-43034724738072/AnsiballZ_copy.py'
Dec 09 16:10:41 compute-0 sudo[123508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:41 compute-0 ceph-mon[75222]: pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:41 compute-0 python3.9[123510]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.weydf04d mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765296640.0379493-44-43034724738072/.source.weydf04d _original_basename=.ipp754f1 follow=False checksum=79da5806237539ec6e5bd48ba32a1fe3ca62f454 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:41 compute-0 sudo[123508]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:42 compute-0 sudo[123660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyfjgmoqfvqyqasekeuyzxxwgtbwtyli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296641.5147061-59-5222461280555/AnsiballZ_setup.py'
Dec 09 16:10:42 compute-0 sudo[123660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:42 compute-0 python3.9[123662]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:10:42 compute-0 sudo[123660]: pam_unix(sudo:session): session closed for user root
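
The setup call above gathers only the SSH host-key facts; the gather_subset masks ('!all', '!min') suppress everything else. The same facts can be pulled ad hoc, a sketch assuming an inventory entry for compute-0 on the controller:

    # Sketch: fetch just the public SSH host keys as Ansible facts.
    ansible compute-0 -b -m ansible.builtin.setup \
      -a 'gather_subset=!all,!min,ssh_host_key_ed25519_public'
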
Dec 09 16:10:43 compute-0 sudo[123812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvtsqhllwqstutwjgvbbxehzcvpvdeec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296642.5960584-68-107223779716989/AnsiballZ_blockinfile.py'
Dec 09 16:10:43 compute-0 sudo[123812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:43 compute-0 python3.9[123814]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnS9g5EUNjTOnNw1/NVHuiTBWBT9IFzqVskWr/bH4K6HGCIi8LNq90yzJTZK561Sd/uYx7ignQywWdN6h7z4cZr9qv3Rg5CMqKNGI3Sg048aQjly0DCM+3Gz9Rv3pmAPxJFbJAQjjCgUNLfBLLfDxFUAkVAgqUg/ARpI5uIwxyZwq9vel6ajDd6a3tuXm0pB7Aj6McpSTsQ5wrCM2B1yntlCdJbxi44x5Jbq9kvLrDHHnx9KU1MNpbissJRJNAwoPoOuDssuzSqKOUX+6Ya3Nj7voprbIs+3BNo+8Aq8Q69gfCKZA7dqrB0bqiyI8Ydki6AsR/fharltZZhDNjNtKl88xiCZnFENR30EZcbzEMfhwMvvtAvW63JHSlTUovX71mnO5/nkm44DtIMgXcduA8NeG2zlteHjuKEdHzLWlQ1BrGKujEpNjVcM9Xz+i6EN3Gojh0+lwpBH+Pz//D/VDQLhqNVFYd7Tljzr7mpVR8zmnVp8H8lmLLeMbkom3Ni9M=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDhYf7YoPese7yuteLiDPa2HkW82iyY0KjwCmBOU4Lns
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFdTc5nlDCIgIxXZgAZFe06AG23238fPcOdUL2uVInP9TXK6vel5Ou/ZAkkJ/5tJ5tAXqxYNejIamSGf87ZPE9s=
                                              create=True mode=0644 path=/tmp/ansible.weydf04d state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:43 compute-0 sudo[123812]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:43 compute-0 ceph-mon[75222]: pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:43 compute-0 sudo[123891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:10:43 compute-0 sudo[123891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:10:43 compute-0 sudo[123891]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:43 compute-0 sudo[123916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:10:43 compute-0 sudo[123916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:10:43 compute-0 sudo[124028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfpcdfmitptfzfeklhelpnqybhlfefoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296643.478147-76-157799374869035/AnsiballZ_command.py'
Dec 09 16:10:43 compute-0 sudo[124028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:44 compute-0 python3.9[124030]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.weydf04d' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:10:44 compute-0 sudo[124028]: pam_unix(sudo:session): session closed for user root
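
Taken together, the Ansible steps since 16:10:38 (tempfile, stat/slurp of the existing file, copy, blockinfile, and the cat above) rebuild the system-wide /etc/ssh/ssh_known_hosts from a managed block of host keys. A condensed shell sketch of the same flow, with ssh-keyscan standing in for the fact-gathered keys (a substitution for illustration, not what the playbook actually does):

    # Sketch: assemble a managed known_hosts block and install it system-wide.
    tmp=$(mktemp /tmp/ansible.XXXXXX)
    {
      echo '# BEGIN ANSIBLE MANAGED BLOCK'
      ssh-keyscan -t rsa,ed25519,ecdsa compute-0.ctlplane.example.com 2>/dev/null
      echo '# END ANSIBLE MANAGED BLOCK'
    } > "$tmp"
    cat "$tmp" > /etc/ssh/ssh_known_hosts
    rm -f "$tmp"
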
Dec 09 16:10:44 compute-0 sudo[123916]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:10:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:10:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:10:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:10:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:10:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:10:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:10:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:10:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:10:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:10:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:10:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:10:44 compute-0 sudo[124074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:10:44 compute-0 sudo[124074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:10:44 compute-0 sudo[124074]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:44 compute-0 sudo[124099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:10:44 compute-0 sudo[124099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:10:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:10:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:10:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:10:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:10:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:10:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:10:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:10:44 compute-0 podman[124188]: 2025-12-09 16:10:44.617738347 +0000 UTC m=+0.058548965 container create 22f4ab37a5d4a5f8ffd7193d6b27b43accb4a2e2b77b0ae5a2ce9d7c2333ab18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 09 16:10:44 compute-0 systemd[1]: Started libpod-conmon-22f4ab37a5d4a5f8ffd7193d6b27b43accb4a2e2b77b0ae5a2ce9d7c2333ab18.scope.
Dec 09 16:10:44 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:10:44 compute-0 podman[124188]: 2025-12-09 16:10:44.596641485 +0000 UTC m=+0.037452133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:10:44 compute-0 podman[124188]: 2025-12-09 16:10:44.706298955 +0000 UTC m=+0.147109633 container init 22f4ab37a5d4a5f8ffd7193d6b27b43accb4a2e2b77b0ae5a2ce9d7c2333ab18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_carver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:10:44 compute-0 podman[124188]: 2025-12-09 16:10:44.714795795 +0000 UTC m=+0.155606433 container start 22f4ab37a5d4a5f8ffd7193d6b27b43accb4a2e2b77b0ae5a2ce9d7c2333ab18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_carver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:10:44 compute-0 podman[124188]: 2025-12-09 16:10:44.718776402 +0000 UTC m=+0.159587070 container attach 22f4ab37a5d4a5f8ffd7193d6b27b43accb4a2e2b77b0ae5a2ce9d7c2333ab18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_carver, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:10:44 compute-0 elegant_carver[124226]: 167 167
Dec 09 16:10:44 compute-0 systemd[1]: libpod-22f4ab37a5d4a5f8ffd7193d6b27b43accb4a2e2b77b0ae5a2ce9d7c2333ab18.scope: Deactivated successfully.
Dec 09 16:10:44 compute-0 conmon[124226]: conmon 22f4ab37a5d4a5f8ffd7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22f4ab37a5d4a5f8ffd7193d6b27b43accb4a2e2b77b0ae5a2ce9d7c2333ab18.scope/container/memory.events
Dec 09 16:10:44 compute-0 podman[124188]: 2025-12-09 16:10:44.723976275 +0000 UTC m=+0.164786903 container died 22f4ab37a5d4a5f8ffd7193d6b27b43accb4a2e2b77b0ae5a2ce9d7c2333ab18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_carver, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:10:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-42d328a6cbe239b8397f23ca67799ff815138460b65710791282a5d5501f6aa5-merged.mount: Deactivated successfully.
Dec 09 16:10:44 compute-0 podman[124188]: 2025-12-09 16:10:44.77339122 +0000 UTC m=+0.214201848 container remove 22f4ab37a5d4a5f8ffd7193d6b27b43accb4a2e2b77b0ae5a2ce9d7c2333ab18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_carver, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 16:10:44 compute-0 systemd[1]: libpod-conmon-22f4ab37a5d4a5f8ffd7193d6b27b43accb4a2e2b77b0ae5a2ce9d7c2333ab18.scope: Deactivated successfully.
Dec 09 16:10:44 compute-0 sudo[124293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxqdigcrcirisplevwoxgwhdtbicpllb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296644.3215861-84-13183443070776/AnsiballZ_file.py'
Dec 09 16:10:44 compute-0 sudo[124293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:44 compute-0 podman[124301]: 2025-12-09 16:10:44.967081464 +0000 UTC m=+0.043502742 container create 8b91dc86192e555d1e908eee423d3c52037f9825aa8dd98d4260b337dd1ab03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:10:45 compute-0 systemd[1]: Started libpod-conmon-8b91dc86192e555d1e908eee423d3c52037f9825aa8dd98d4260b337dd1ab03f.scope.
Dec 09 16:10:45 compute-0 python3.9[124295]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.weydf04d state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:45 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:10:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaaa6cbdc029cc1ef30a3d2fd84be68d86e5840d4b23dd1b1ad0cc1e9ff4c91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaaa6cbdc029cc1ef30a3d2fd84be68d86e5840d4b23dd1b1ad0cc1e9ff4c91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaaa6cbdc029cc1ef30a3d2fd84be68d86e5840d4b23dd1b1ad0cc1e9ff4c91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaaa6cbdc029cc1ef30a3d2fd84be68d86e5840d4b23dd1b1ad0cc1e9ff4c91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaaa6cbdc029cc1ef30a3d2fd84be68d86e5840d4b23dd1b1ad0cc1e9ff4c91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:45 compute-0 sudo[124293]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:45 compute-0 podman[124301]: 2025-12-09 16:10:44.948439485 +0000 UTC m=+0.024860773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:10:45 compute-0 podman[124301]: 2025-12-09 16:10:45.053607662 +0000 UTC m=+0.130028940 container init 8b91dc86192e555d1e908eee423d3c52037f9825aa8dd98d4260b337dd1ab03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:10:45 compute-0 podman[124301]: 2025-12-09 16:10:45.06677972 +0000 UTC m=+0.143200998 container start 8b91dc86192e555d1e908eee423d3c52037f9825aa8dd98d4260b337dd1ab03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:10:45 compute-0 podman[124301]: 2025-12-09 16:10:45.070600212 +0000 UTC m=+0.147021510 container attach 8b91dc86192e555d1e908eee423d3c52037f9825aa8dd98d4260b337dd1ab03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:10:45 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 09 16:10:45 compute-0 sshd-session[122775]: Connection closed by 192.168.122.30 port 45420
Dec 09 16:10:45 compute-0 sshd-session[122772]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:10:45 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Dec 09 16:10:45 compute-0 systemd[1]: session-41.scope: Consumed 5.018s CPU time.
Dec 09 16:10:45 compute-0 systemd-logind[786]: Session 41 logged out. Waiting for processes to exit.
Dec 09 16:10:45 compute-0 systemd-logind[786]: Removed session 41.
Dec 09 16:10:45 compute-0 ceph-mon[75222]: pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:45 compute-0 goofy_rosalind[124318]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:10:45 compute-0 goofy_rosalind[124318]: --> All data devices are unavailable
Dec 09 16:10:45 compute-0 systemd[1]: libpod-8b91dc86192e555d1e908eee423d3c52037f9825aa8dd98d4260b337dd1ab03f.scope: Deactivated successfully.
Dec 09 16:10:45 compute-0 podman[124301]: 2025-12-09 16:10:45.553390378 +0000 UTC m=+0.629811656 container died 8b91dc86192e555d1e908eee423d3c52037f9825aa8dd98d4260b337dd1ab03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_rosalind, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:10:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-acaaa6cbdc029cc1ef30a3d2fd84be68d86e5840d4b23dd1b1ad0cc1e9ff4c91-merged.mount: Deactivated successfully.
Dec 09 16:10:45 compute-0 podman[124301]: 2025-12-09 16:10:45.602889816 +0000 UTC m=+0.679311084 container remove 8b91dc86192e555d1e908eee423d3c52037f9825aa8dd98d4260b337dd1ab03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_rosalind, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:10:45 compute-0 systemd[1]: libpod-conmon-8b91dc86192e555d1e908eee423d3c52037f9825aa8dd98d4260b337dd1ab03f.scope: Deactivated successfully.
Dec 09 16:10:45 compute-0 sudo[124099]: pam_unix(sudo:session): session closed for user root
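
The "All data devices are unavailable" line at 16:10:45 reads as expected behavior here rather than an error: ceph-volume's lvm batch skips LVs that already carry OSD metadata, and the lvm list output that follows at 16:10:46 shows all three LVs tagged with osd_ids 0-2 (an inference from the surrounding log). One way to inspect availability and rejection reasons directly, reusing the pass-through form logged above and assuming a cephadm binary on PATH:

    # Sketch: ask ceph-volume why devices are (un)available.
    cephadm ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- \
      inventory --format json-pretty
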
Dec 09 16:10:45 compute-0 sudo[124376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:10:45 compute-0 sudo[124376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:10:45 compute-0 sudo[124376]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:45 compute-0 sudo[124401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:10:45 compute-0 sudo[124401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:10:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:46 compute-0 podman[124437]: 2025-12-09 16:10:46.069073734 +0000 UTC m=+0.061544693 container create 1b87e41883cadfaf9fcb1c2af7578140064b88d0502b501a5b108cc38c058b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_black, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 09 16:10:46 compute-0 systemd[1]: Started libpod-conmon-1b87e41883cadfaf9fcb1c2af7578140064b88d0502b501a5b108cc38c058b2d.scope.
Dec 09 16:10:46 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:10:46 compute-0 podman[124437]: 2025-12-09 16:10:46.043217363 +0000 UTC m=+0.035688382 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:10:46 compute-0 podman[124437]: 2025-12-09 16:10:46.141378873 +0000 UTC m=+0.133849922 container init 1b87e41883cadfaf9fcb1c2af7578140064b88d0502b501a5b108cc38c058b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_black, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:10:46 compute-0 podman[124437]: 2025-12-09 16:10:46.147279977 +0000 UTC m=+0.139750966 container start 1b87e41883cadfaf9fcb1c2af7578140064b88d0502b501a5b108cc38c058b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_black, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:10:46 compute-0 podman[124437]: 2025-12-09 16:10:46.151320736 +0000 UTC m=+0.143791795 container attach 1b87e41883cadfaf9fcb1c2af7578140064b88d0502b501a5b108cc38c058b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_black, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:10:46 compute-0 boring_black[124453]: 167 167
Dec 09 16:10:46 compute-0 systemd[1]: libpod-1b87e41883cadfaf9fcb1c2af7578140064b88d0502b501a5b108cc38c058b2d.scope: Deactivated successfully.
Dec 09 16:10:46 compute-0 podman[124437]: 2025-12-09 16:10:46.154630653 +0000 UTC m=+0.147101642 container died 1b87e41883cadfaf9fcb1c2af7578140064b88d0502b501a5b108cc38c058b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_black, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:10:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-374932f98cd04ac059470dfc5fe7ae1c2e032bc60edfc8bf2aa409e0a138d642-merged.mount: Deactivated successfully.
Dec 09 16:10:46 compute-0 podman[124437]: 2025-12-09 16:10:46.209012445 +0000 UTC m=+0.201483434 container remove 1b87e41883cadfaf9fcb1c2af7578140064b88d0502b501a5b108cc38c058b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_black, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 09 16:10:46 compute-0 systemd[1]: libpod-conmon-1b87e41883cadfaf9fcb1c2af7578140064b88d0502b501a5b108cc38c058b2d.scope: Deactivated successfully.
Dec 09 16:10:46 compute-0 podman[124477]: 2025-12-09 16:10:46.442912713 +0000 UTC m=+0.067690925 container create cdb8c1513e7bd1fec5af19d83980d909a052d755f843b819c853330adb506ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_black, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:10:46 compute-0 systemd[1]: Started libpod-conmon-cdb8c1513e7bd1fec5af19d83980d909a052d755f843b819c853330adb506ec0.scope.
Dec 09 16:10:46 compute-0 podman[124477]: 2025-12-09 16:10:46.42107713 +0000 UTC m=+0.045855402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:10:46 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9f93316714451fde83291d7d9be643fbb7776ddc69619acf5fc820e4d50c95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9f93316714451fde83291d7d9be643fbb7776ddc69619acf5fc820e4d50c95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9f93316714451fde83291d7d9be643fbb7776ddc69619acf5fc820e4d50c95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9f93316714451fde83291d7d9be643fbb7776ddc69619acf5fc820e4d50c95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:46 compute-0 podman[124477]: 2025-12-09 16:10:46.527932636 +0000 UTC m=+0.152710848 container init cdb8c1513e7bd1fec5af19d83980d909a052d755f843b819c853330adb506ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_black, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:10:46 compute-0 podman[124477]: 2025-12-09 16:10:46.539880738 +0000 UTC m=+0.164658910 container start cdb8c1513e7bd1fec5af19d83980d909a052d755f843b819c853330adb506ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:10:46 compute-0 podman[124477]: 2025-12-09 16:10:46.543947718 +0000 UTC m=+0.168725900 container attach cdb8c1513e7bd1fec5af19d83980d909a052d755f843b819c853330adb506ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_black, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:10:46 compute-0 objective_black[124493]: {
Dec 09 16:10:46 compute-0 objective_black[124493]:     "0": [
Dec 09 16:10:46 compute-0 objective_black[124493]:         {
Dec 09 16:10:46 compute-0 objective_black[124493]:             "devices": [
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "/dev/loop3"
Dec 09 16:10:46 compute-0 objective_black[124493]:             ],
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_name": "ceph_lv0",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_size": "21470642176",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "name": "ceph_lv0",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "tags": {
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.cluster_name": "ceph",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.crush_device_class": "",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.encrypted": "0",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.objectstore": "bluestore",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.osd_id": "0",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.type": "block",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.vdo": "0",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.with_tpm": "0"
Dec 09 16:10:46 compute-0 objective_black[124493]:             },
Dec 09 16:10:46 compute-0 objective_black[124493]:             "type": "block",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "vg_name": "ceph_vg0"
Dec 09 16:10:46 compute-0 objective_black[124493]:         }
Dec 09 16:10:46 compute-0 objective_black[124493]:     ],
Dec 09 16:10:46 compute-0 objective_black[124493]:     "1": [
Dec 09 16:10:46 compute-0 objective_black[124493]:         {
Dec 09 16:10:46 compute-0 objective_black[124493]:             "devices": [
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "/dev/loop4"
Dec 09 16:10:46 compute-0 objective_black[124493]:             ],
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_name": "ceph_lv1",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_size": "21470642176",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "name": "ceph_lv1",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "tags": {
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.cluster_name": "ceph",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.crush_device_class": "",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.encrypted": "0",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.objectstore": "bluestore",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.osd_id": "1",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.type": "block",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.vdo": "0",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.with_tpm": "0"
Dec 09 16:10:46 compute-0 objective_black[124493]:             },
Dec 09 16:10:46 compute-0 objective_black[124493]:             "type": "block",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "vg_name": "ceph_vg1"
Dec 09 16:10:46 compute-0 objective_black[124493]:         }
Dec 09 16:10:46 compute-0 objective_black[124493]:     ],
Dec 09 16:10:46 compute-0 objective_black[124493]:     "2": [
Dec 09 16:10:46 compute-0 objective_black[124493]:         {
Dec 09 16:10:46 compute-0 objective_black[124493]:             "devices": [
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "/dev/loop5"
Dec 09 16:10:46 compute-0 objective_black[124493]:             ],
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_name": "ceph_lv2",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_size": "21470642176",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "name": "ceph_lv2",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "tags": {
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.cluster_name": "ceph",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.crush_device_class": "",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.encrypted": "0",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.objectstore": "bluestore",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.osd_id": "2",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.type": "block",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.vdo": "0",
Dec 09 16:10:46 compute-0 objective_black[124493]:                 "ceph.with_tpm": "0"
Dec 09 16:10:46 compute-0 objective_black[124493]:             },
Dec 09 16:10:46 compute-0 objective_black[124493]:             "type": "block",
Dec 09 16:10:46 compute-0 objective_black[124493]:             "vg_name": "ceph_vg2"
Dec 09 16:10:46 compute-0 objective_black[124493]:         }
Dec 09 16:10:46 compute-0 objective_black[124493]:     ]
Dec 09 16:10:46 compute-0 objective_black[124493]: }
Dec 09 16:10:46 compute-0 systemd[1]: libpod-cdb8c1513e7bd1fec5af19d83980d909a052d755f843b819c853330adb506ec0.scope: Deactivated successfully.
Dec 09 16:10:46 compute-0 podman[124477]: 2025-12-09 16:10:46.860299964 +0000 UTC m=+0.485078156 container died cdb8c1513e7bd1fec5af19d83980d909a052d755f843b819c853330adb506ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_black, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:10:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb9f93316714451fde83291d7d9be643fbb7776ddc69619acf5fc820e4d50c95-merged.mount: Deactivated successfully.
Dec 09 16:10:46 compute-0 podman[124477]: 2025-12-09 16:10:46.915583902 +0000 UTC m=+0.540362084 container remove cdb8c1513e7bd1fec5af19d83980d909a052d755f843b819c853330adb506ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_black, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:10:46 compute-0 systemd[1]: libpod-conmon-cdb8c1513e7bd1fec5af19d83980d909a052d755f843b819c853330adb506ec0.scope: Deactivated successfully.
Dec 09 16:10:46 compute-0 sudo[124401]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:47 compute-0 sudo[124514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:10:47 compute-0 sudo[124514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:10:47 compute-0 sudo[124514]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:47 compute-0 sudo[124539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:10:47 compute-0 sudo[124539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:10:47 compute-0 podman[124576]: 2025-12-09 16:10:47.36312218 +0000 UTC m=+0.047711976 container create 656b65b3d29bc242e092f960eda87af90f85249b88b4ce32268548c47e6bc62e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 09 16:10:47 compute-0 systemd[1]: Started libpod-conmon-656b65b3d29bc242e092f960eda87af90f85249b88b4ce32268548c47e6bc62e.scope.
Dec 09 16:10:47 compute-0 ceph-mon[75222]: pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:47 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:10:47 compute-0 podman[124576]: 2025-12-09 16:10:47.343648277 +0000 UTC m=+0.028238083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:10:47 compute-0 podman[124576]: 2025-12-09 16:10:47.441637623 +0000 UTC m=+0.126227429 container init 656b65b3d29bc242e092f960eda87af90f85249b88b4ce32268548c47e6bc62e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:10:47 compute-0 podman[124576]: 2025-12-09 16:10:47.472365827 +0000 UTC m=+0.156955613 container start 656b65b3d29bc242e092f960eda87af90f85249b88b4ce32268548c47e6bc62e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:10:47 compute-0 amazing_stonebraker[124592]: 167 167
Dec 09 16:10:47 compute-0 podman[124576]: 2025-12-09 16:10:47.477025865 +0000 UTC m=+0.161615811 container attach 656b65b3d29bc242e092f960eda87af90f85249b88b4ce32268548c47e6bc62e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:10:47 compute-0 systemd[1]: libpod-656b65b3d29bc242e092f960eda87af90f85249b88b4ce32268548c47e6bc62e.scope: Deactivated successfully.
Dec 09 16:10:47 compute-0 conmon[124592]: conmon 656b65b3d29bc242e092 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-656b65b3d29bc242e092f960eda87af90f85249b88b4ce32268548c47e6bc62e.scope/container/memory.events
Dec 09 16:10:47 compute-0 podman[124576]: 2025-12-09 16:10:47.478886249 +0000 UTC m=+0.163476045 container died 656b65b3d29bc242e092f960eda87af90f85249b88b4ce32268548c47e6bc62e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 09 16:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-94e4708435c6e5ba59456e6612f6f8574137f01726135789d6da677a4f613941-merged.mount: Deactivated successfully.
Dec 09 16:10:47 compute-0 podman[124576]: 2025-12-09 16:10:47.516538592 +0000 UTC m=+0.201128378 container remove 656b65b3d29bc242e092f960eda87af90f85249b88b4ce32268548c47e6bc62e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:10:47 compute-0 systemd[1]: libpod-conmon-656b65b3d29bc242e092f960eda87af90f85249b88b4ce32268548c47e6bc62e.scope: Deactivated successfully.
Dec 09 16:10:47 compute-0 podman[124614]: 2025-12-09 16:10:47.665588848 +0000 UTC m=+0.041295880 container create 3e08e3333a7a07f94a6fb08545d0c55c15d020a1fb37fb36e9bd34ccc8decca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:10:47 compute-0 systemd[1]: Started libpod-conmon-3e08e3333a7a07f94a6fb08545d0c55c15d020a1fb37fb36e9bd34ccc8decca1.scope.
Dec 09 16:10:47 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc5e7886bd6cbd0b793c57d006fd75c02b7eab4b33eca7c9da128fb76afa3f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc5e7886bd6cbd0b793c57d006fd75c02b7eab4b33eca7c9da128fb76afa3f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc5e7886bd6cbd0b793c57d006fd75c02b7eab4b33eca7c9da128fb76afa3f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc5e7886bd6cbd0b793c57d006fd75c02b7eab4b33eca7c9da128fb76afa3f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:10:47 compute-0 podman[124614]: 2025-12-09 16:10:47.72712311 +0000 UTC m=+0.102830192 container init 3e08e3333a7a07f94a6fb08545d0c55c15d020a1fb37fb36e9bd34ccc8decca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bose, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:10:47 compute-0 podman[124614]: 2025-12-09 16:10:47.736691542 +0000 UTC m=+0.112398564 container start 3e08e3333a7a07f94a6fb08545d0c55c15d020a1fb37fb36e9bd34ccc8decca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:10:47 compute-0 podman[124614]: 2025-12-09 16:10:47.73990239 +0000 UTC m=+0.115609422 container attach 3e08e3333a7a07f94a6fb08545d0c55c15d020a1fb37fb36e9bd34ccc8decca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:10:47 compute-0 podman[124614]: 2025-12-09 16:10:47.649617891 +0000 UTC m=+0.025324943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:10:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:48 compute-0 lvm[124709]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:10:48 compute-0 lvm[124710]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:10:48 compute-0 lvm[124709]: VG ceph_vg0 finished
Dec 09 16:10:48 compute-0 lvm[124710]: VG ceph_vg1 finished
Dec 09 16:10:48 compute-0 lvm[124712]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:10:48 compute-0 lvm[124712]: VG ceph_vg2 finished
Dec 09 16:10:48 compute-0 angry_bose[124631]: {}
Dec 09 16:10:48 compute-0 systemd[1]: libpod-3e08e3333a7a07f94a6fb08545d0c55c15d020a1fb37fb36e9bd34ccc8decca1.scope: Deactivated successfully.
Dec 09 16:10:48 compute-0 podman[124614]: 2025-12-09 16:10:48.462176458 +0000 UTC m=+0.837883490 container died 3e08e3333a7a07f94a6fb08545d0c55c15d020a1fb37fb36e9bd34ccc8decca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bose, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 09 16:10:48 compute-0 systemd[1]: libpod-3e08e3333a7a07f94a6fb08545d0c55c15d020a1fb37fb36e9bd34ccc8decca1.scope: Consumed 1.204s CPU time.
Dec 09 16:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bc5e7886bd6cbd0b793c57d006fd75c02b7eab4b33eca7c9da128fb76afa3f3-merged.mount: Deactivated successfully.
Dec 09 16:10:48 compute-0 podman[124614]: 2025-12-09 16:10:48.50797721 +0000 UTC m=+0.883684242 container remove 3e08e3333a7a07f94a6fb08545d0c55c15d020a1fb37fb36e9bd34ccc8decca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bose, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:10:48 compute-0 systemd[1]: libpod-conmon-3e08e3333a7a07f94a6fb08545d0c55c15d020a1fb37fb36e9bd34ccc8decca1.scope: Deactivated successfully.
Dec 09 16:10:48 compute-0 sudo[124539]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:10:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:10:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:10:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:10:48 compute-0 sudo[124728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:10:48 compute-0 sudo[124728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:10:48 compute-0 sudo[124728]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:10:49 compute-0 ceph-mon[75222]: pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:10:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:10:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:50 compute-0 sshd-session[124753]: Accepted publickey for zuul from 192.168.122.30 port 51868 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:10:50 compute-0 systemd-logind[786]: New session 42 of user zuul.
Dec 09 16:10:50 compute-0 systemd[1]: Started Session 42 of User zuul.
Dec 09 16:10:50 compute-0 sshd-session[124753]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:10:51 compute-0 ceph-mon[75222]: pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:51 compute-0 python3.9[124906]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:10:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:52 compute-0 sudo[125060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzxjftidfvuxxcfytmuhdqqvddweyfno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296652.0247545-32-204653642565515/AnsiballZ_systemd.py'
Dec 09 16:10:52 compute-0 sudo[125060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:52 compute-0 python3.9[125062]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 09 16:10:52 compute-0 sudo[125060]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:53 compute-0 sudo[125214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zztkeyqiqvnlzpdkszsfkfqcktraizmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296653.183541-40-96772077273083/AnsiballZ_systemd.py'
Dec 09 16:10:53 compute-0 sudo[125214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:53 compute-0 ceph-mon[75222]: pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:53 compute-0 python3.9[125216]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:10:53 compute-0 sudo[125214]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:10:54 compute-0 sudo[125367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igrdatjnlkmrlffgmddxwukezustdhzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296654.1020837-49-251579445588251/AnsiballZ_command.py'
Dec 09 16:10:54 compute-0 sudo[125367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:54 compute-0 python3.9[125369]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:10:54 compute-0 sudo[125367]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:55 compute-0 sudo[125520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggzgjtufhupbkdcevnumwljlrkndmfws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296654.9591208-57-115770107046937/AnsiballZ_stat.py'
Dec 09 16:10:55 compute-0 sudo[125520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:55 compute-0 python3.9[125522]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:10:55 compute-0 sudo[125520]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:55 compute-0 ceph-mon[75222]: pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:56 compute-0 sudo[125674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjzfymsgwfomvwmhxxbetaihdzwxdeom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296655.7619069-66-49407000753439/AnsiballZ_file.py'
Dec 09 16:10:56 compute-0 sudo[125674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:10:56 compute-0 sshd-session[125599]: Invalid user dspace from 146.190.31.45 port 54282
Dec 09 16:10:56 compute-0 sshd-session[125599]: Connection closed by invalid user dspace 146.190.31.45 port 54282 [preauth]
Dec 09 16:10:56 compute-0 python3.9[125676]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:10:56 compute-0 sudo[125674]: pam_unix(sudo:session): session closed for user root
Dec 09 16:10:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:10:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:10:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:10:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:10:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:10:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:10:56 compute-0 sshd-session[124756]: Connection closed by 192.168.122.30 port 51868
Dec 09 16:10:56 compute-0 sshd-session[124753]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:10:56 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Dec 09 16:10:56 compute-0 systemd[1]: session-42.scope: Consumed 4.016s CPU time.
Dec 09 16:10:56 compute-0 systemd-logind[786]: Session 42 logged out. Waiting for processes to exit.
Dec 09 16:10:56 compute-0 systemd-logind[786]: Removed session 42.
Dec 09 16:10:57 compute-0 ceph-mon[75222]: pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:10:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:10:59 compute-0 ceph-mon[75222]: pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:01 compute-0 ceph-mon[75222]: pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:02 compute-0 sshd-session[71391]: Received disconnect from 38.102.83.236 port 40256:11: disconnected by user
Dec 09 16:11:02 compute-0 sshd-session[71391]: Disconnected from user zuul 38.102.83.236 port 40256
Dec 09 16:11:02 compute-0 sshd-session[71388]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:11:02 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Dec 09 16:11:02 compute-0 systemd[1]: session-18.scope: Consumed 1min 41.515s CPU time.
Dec 09 16:11:02 compute-0 systemd-logind[786]: Session 18 logged out. Waiting for processes to exit.
Dec 09 16:11:02 compute-0 systemd-logind[786]: Removed session 18.
Dec 09 16:11:02 compute-0 ceph-mon[75222]: pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:03 compute-0 sshd-session[125701]: Accepted publickey for zuul from 192.168.122.30 port 53678 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:11:04 compute-0 systemd-logind[786]: New session 43 of user zuul.
Dec 09 16:11:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:04 compute-0 systemd[1]: Started Session 43 of User zuul.
Dec 09 16:11:04 compute-0 sshd-session[125701]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:11:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:11:05 compute-0 python3.9[125854]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:11:05 compute-0 ceph-mon[75222]: pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:05 compute-0 sudo[126008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcugcyhnknihrriciaaaoiujurxczjqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296665.5126226-34-54357110368780/AnsiballZ_setup.py'
Dec 09 16:11:05 compute-0 sudo[126008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:06 compute-0 python3.9[126010]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:11:06 compute-0 sudo[126008]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:06 compute-0 sudo[126092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itqqotyykuppzqfxdjpjyxzzstxnoios ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296665.5126226-34-54357110368780/AnsiballZ_dnf.py'
Dec 09 16:11:06 compute-0 sudo[126092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:07 compute-0 python3.9[126094]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 16:11:07 compute-0 ceph-mon[75222]: pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:08 compute-0 sudo[126092]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:09 compute-0 ceph-mon[75222]: pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:09 compute-0 python3.9[126245]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:11:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:11:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:10 compute-0 python3.9[126396]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 16:11:11 compute-0 ceph-mon[75222]: pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:11 compute-0 python3.9[126546]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:11:11 compute-0 python3.9[126696]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:11:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:12 compute-0 sshd-session[125704]: Connection closed by 192.168.122.30 port 53678
Dec 09 16:11:12 compute-0 sshd-session[125701]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:11:12 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Dec 09 16:11:12 compute-0 systemd[1]: session-43.scope: Consumed 5.917s CPU time.
Dec 09 16:11:12 compute-0 systemd-logind[786]: Session 43 logged out. Waiting for processes to exit.
Dec 09 16:11:12 compute-0 systemd-logind[786]: Removed session 43.
Dec 09 16:11:13 compute-0 ceph-mon[75222]: pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:11:15 compute-0 ceph-mon[75222]: pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:17 compute-0 ceph-mon[75222]: pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:18 compute-0 sshd-session[126721]: Accepted publickey for zuul from 192.168.122.30 port 53620 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:11:18 compute-0 systemd-logind[786]: New session 44 of user zuul.
Dec 09 16:11:18 compute-0 systemd[1]: Started Session 44 of User zuul.
Dec 09 16:11:18 compute-0 sshd-session[126721]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:11:19 compute-0 ceph-mon[75222]: pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:19 compute-0 python3.9[126874]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:11:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:11:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:20 compute-0 sudo[127028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olzrcfjznxcddccezhulafhvtbrqbuiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296680.4022954-50-208061903446514/AnsiballZ_file.py'
Dec 09 16:11:20 compute-0 sudo[127028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:21 compute-0 python3.9[127030]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:11:21 compute-0 sudo[127028]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:21 compute-0 ceph-mon[75222]: pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:21 compute-0 sudo[127180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbzniwntthknguwsonlkokwztulhoupm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296681.3108087-50-163660050777557/AnsiballZ_file.py'
Dec 09 16:11:21 compute-0 sudo[127180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:21 compute-0 python3.9[127182]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:11:21 compute-0 sudo[127180]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:22 compute-0 sudo[127332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwhzagsigqsxwkxgxzhywgrvatlgksxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296681.9600556-65-238837118573760/AnsiballZ_stat.py'
Dec 09 16:11:22 compute-0 sudo[127332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:22 compute-0 python3.9[127334]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:22 compute-0 sudo[127332]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:23 compute-0 sudo[127455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlocnzfnqreojexgsuwqysqlehctiliw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296681.9600556-65-238837118573760/AnsiballZ_copy.py'
Dec 09 16:11:23 compute-0 sudo[127455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:23 compute-0 ceph-mon[75222]: pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:23 compute-0 python3.9[127457]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296681.9600556-65-238837118573760/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0ffb705c7ebf5b43915eb9fae5dbe9ded6a86772 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:23 compute-0 sudo[127455]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:23 compute-0 sudo[127607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blhycrqhwepcsexnduspbadwbmznkuyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296683.464739-65-273323279361985/AnsiballZ_stat.py'
Dec 09 16:11:23 compute-0 sudo[127607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:23 compute-0 python3.9[127609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:23 compute-0 sudo[127607]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:24 compute-0 sudo[127730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brpaamqixgwxrtoncjvblxvpbuvlmjhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296683.464739-65-273323279361985/AnsiballZ_copy.py'
Dec 09 16:11:24 compute-0 sudo[127730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:11:24 compute-0 python3.9[127732]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296683.464739-65-273323279361985/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=5dd9faa82fbaf7098a8e17b739d713c3d3b62272 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:24 compute-0 sudo[127730]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:24 compute-0 sudo[127882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjdxqgpzdvffvfrlxbynhfjsxjmkkskf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296684.6701274-65-128389494016653/AnsiballZ_stat.py'
Dec 09 16:11:24 compute-0 sudo[127882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:25 compute-0 python3.9[127884]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:25 compute-0 sudo[127882]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:25 compute-0 ceph-mon[75222]: pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:25 compute-0 sudo[128005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvpllakpukcuzrlzkgawhvatjiynldli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296684.6701274-65-128389494016653/AnsiballZ_copy.py'
Dec 09 16:11:25 compute-0 sudo[128005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:25 compute-0 python3.9[128007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296684.6701274-65-128389494016653/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=159bf952c81e1c3d9c2ca297462b29b6f4d1235b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:25 compute-0 sudo[128005]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:11:25
Dec 09 16:11:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:11:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:11:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'images', 'backups', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control']
Dec 09 16:11:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:26 compute-0 sudo[128157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gptsenxenkvvhyqrtiakpywvnsxummnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296685.9431-109-185370403638386/AnsiballZ_file.py'
Dec 09 16:11:26 compute-0 sudo[128157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:26 compute-0 python3.9[128159]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:11:26 compute-0 sudo[128157]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:11:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:11:26 compute-0 sudo[128309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhahokjsyfnqqlyvureospixavpvdonh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296686.5485995-109-188236596490750/AnsiballZ_file.py'
Dec 09 16:11:26 compute-0 sudo[128309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:27 compute-0 python3.9[128311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:11:27 compute-0 sudo[128309]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:27 compute-0 ceph-mon[75222]: pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:27 compute-0 sudo[128461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdsiolrlndouagdycmhblysdslmjmwsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296687.238009-124-158304877043326/AnsiballZ_stat.py'
Dec 09 16:11:27 compute-0 sudo[128461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:27 compute-0 python3.9[128463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:27 compute-0 sudo[128461]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:28 compute-0 sudo[128584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddjmfdussztwgvwovmqqdelrpkisuqnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296687.238009-124-158304877043326/AnsiballZ_copy.py'
Dec 09 16:11:28 compute-0 sudo[128584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:28 compute-0 python3.9[128586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296687.238009-124-158304877043326/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5828dfd3b512ebb876b5dac1b71aa5a0d6009e57 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:28 compute-0 sudo[128584]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:28 compute-0 sudo[128736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzhufhjeiiqvnxetndteyirclvbvmymx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296688.5373378-124-257691479983228/AnsiballZ_stat.py'
Dec 09 16:11:28 compute-0 sudo[128736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:28 compute-0 python3.9[128738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:28 compute-0 sudo[128736]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:29 compute-0 ceph-mon[75222]: pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:29 compute-0 sudo[128859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exfkduqzuydvypyndcuxdrtczpujnlhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296688.5373378-124-257691479983228/AnsiballZ_copy.py'
Dec 09 16:11:29 compute-0 sudo[128859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:11:29 compute-0 python3.9[128861]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296688.5373378-124-257691479983228/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=108812cbec890e2591f81f90f0669ae216444a46 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:29 compute-0 sudo[128859]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:29 compute-0 sudo[129011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqkckiisbiwgyonckvwwjuojooutgqtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296689.6443667-124-175203610937412/AnsiballZ_stat.py'
Dec 09 16:11:29 compute-0 sudo[129011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:30 compute-0 python3.9[129013]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:30 compute-0 sudo[129011]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:30 compute-0 rsyslogd[1004]: imjournal: 865 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 09 16:11:30 compute-0 sudo[129134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kblkkadotvefrnyizwsyfwwlxghevipl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296689.6443667-124-175203610937412/AnsiballZ_copy.py'
Dec 09 16:11:30 compute-0 sudo[129134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:30 compute-0 python3.9[129136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296689.6443667-124-175203610937412/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=77daaa3b5641f542d09493a85b75960ee54b1712 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:30 compute-0 sudo[129134]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:31 compute-0 sudo[129286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvvpdeinczgamnmnihbkcglvqxywqybk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296690.856088-168-77387551432609/AnsiballZ_file.py'
Dec 09 16:11:31 compute-0 sudo[129286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:31 compute-0 ceph-mon[75222]: pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:31 compute-0 python3.9[129288]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:11:31 compute-0 sudo[129286]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:31 compute-0 sudo[129438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwfjeoswuzkrmnctpuqgpvbslaknsqrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296691.4615676-168-3416165401079/AnsiballZ_file.py'
Dec 09 16:11:31 compute-0 sudo[129438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:31 compute-0 python3.9[129440]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:11:32 compute-0 sudo[129438]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:32 compute-0 sudo[129590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxmgnakzuxusapjpabcrethkclkomutk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296692.1849608-183-148102172505945/AnsiballZ_stat.py'
Dec 09 16:11:32 compute-0 sudo[129590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:32 compute-0 python3.9[129592]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:32 compute-0 sudo[129590]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:33 compute-0 sudo[129713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teekdnurebobdpisglipzfnjasewqdwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296692.1849608-183-148102172505945/AnsiballZ_copy.py'
Dec 09 16:11:33 compute-0 sudo[129713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:33 compute-0 python3.9[129715]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296692.1849608-183-148102172505945/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=653b36fc2e2c51dc80d417f107a315c60fc41445 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:33 compute-0 ceph-mon[75222]: pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:33 compute-0 sudo[129713]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:33 compute-0 sudo[129865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfudgkxoreqrbenapbykwukaajqqsyhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296693.4551797-183-41544400624103/AnsiballZ_stat.py'
Dec 09 16:11:33 compute-0 sudo[129865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:33 compute-0 python3.9[129867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:33 compute-0 sudo[129865]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:34 compute-0 sudo[129988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xycgyppfegoknftwrpzfctyiibdkqjsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296693.4551797-183-41544400624103/AnsiballZ_copy.py'
Dec 09 16:11:34 compute-0 sudo[129988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:11:34 compute-0 python3.9[129990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296693.4551797-183-41544400624103/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=108812cbec890e2591f81f90f0669ae216444a46 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:34 compute-0 sudo[129988]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:35 compute-0 sudo[130140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygcxdbaeozvbolmynrnyhtscsegrectf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296694.75696-183-95148327505028/AnsiballZ_stat.py'
Dec 09 16:11:35 compute-0 sudo[130140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:35 compute-0 python3.9[130142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:35 compute-0 sudo[130140]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:35 compute-0 ceph-mon[75222]: pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:35 compute-0 sudo[130263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqabzfneqgbthadsbgzhbujgtbjbqdoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296694.75696-183-95148327505028/AnsiballZ_copy.py'
Dec 09 16:11:35 compute-0 sudo[130263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:35 compute-0 python3.9[130265]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296694.75696-183-95148327505028/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3d75dd2eb8b09a5ee0dc873594ca07464daabd1c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:35 compute-0 sudo[130263]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:11:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:11:36 compute-0 sudo[130415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucpkwckodtdeicgkkhagelydsmrmbolj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296696.6489-243-101786390770851/AnsiballZ_file.py'
Dec 09 16:11:36 compute-0 sudo[130415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:37 compute-0 python3.9[130417]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:11:37 compute-0 sudo[130415]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:37 compute-0 ceph-mon[75222]: pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:37 compute-0 sudo[130567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzcmfxcraoatsigjuobcumvpumemsmxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296697.3444526-251-182111103634618/AnsiballZ_stat.py'
Dec 09 16:11:37 compute-0 sudo[130567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:37 compute-0 python3.9[130569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:37 compute-0 sudo[130567]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:38 compute-0 sudo[130690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lysbbghxqwfoizcsdwqjfegykkhsrnqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296697.3444526-251-182111103634618/AnsiballZ_copy.py'
Dec 09 16:11:38 compute-0 sudo[130690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:38 compute-0 python3.9[130692]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296697.3444526-251-182111103634618/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=00e758ddc55a34ae8ccf237d115f45aaa7d998db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:38 compute-0 sudo[130690]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:39 compute-0 sudo[130842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwbgwhvxessphgdloaujjfqbnddetwia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296698.7119236-267-169942758913055/AnsiballZ_file.py'
Dec 09 16:11:39 compute-0 sudo[130842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:39 compute-0 python3.9[130844]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:11:39 compute-0 sudo[130842]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:39 compute-0 ceph-mon[75222]: pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:11:39 compute-0 sudo[130996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdsxofltstrolvwwycmhmwrijlqqffzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296699.3989313-275-127893236392084/AnsiballZ_stat.py'
Dec 09 16:11:39 compute-0 sudo[130996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:39 compute-0 sshd-session[130892]: Invalid user dspace from 146.190.31.45 port 41296
Dec 09 16:11:39 compute-0 sshd-session[130892]: Connection closed by invalid user dspace 146.190.31.45 port 41296 [preauth]
Dec 09 16:11:39 compute-0 python3.9[130998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:39 compute-0 sudo[130996]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:40 compute-0 sudo[131119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bewrxtwrujezcrffiwskvssillblpmmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296699.3989313-275-127893236392084/AnsiballZ_copy.py'
Dec 09 16:11:40 compute-0 sudo[131119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:40 compute-0 python3.9[131121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296699.3989313-275-127893236392084/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=00e758ddc55a34ae8ccf237d115f45aaa7d998db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:40 compute-0 sudo[131119]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:41 compute-0 sudo[131271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojardndrydhydzlhwylbedzvzebwinyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296700.726035-291-90798096475555/AnsiballZ_file.py'
Dec 09 16:11:41 compute-0 sudo[131271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:41 compute-0 python3.9[131273]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:11:41 compute-0 sudo[131271]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:41 compute-0 ceph-mon[75222]: pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:41 compute-0 sudo[131423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzmmgudsjnpjlutnfjaiwqkokryrntfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296701.5073645-299-159794823747349/AnsiballZ_stat.py'
Dec 09 16:11:41 compute-0 sudo[131423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:41 compute-0 python3.9[131425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:41 compute-0 sudo[131423]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:42 compute-0 sudo[131546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpcgcdzwllzykywkxilnpmbaqsbcfxqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296701.5073645-299-159794823747349/AnsiballZ_copy.py'
Dec 09 16:11:42 compute-0 sudo[131546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:42 compute-0 python3.9[131548]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296701.5073645-299-159794823747349/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=00e758ddc55a34ae8ccf237d115f45aaa7d998db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:42 compute-0 sudo[131546]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:43 compute-0 sudo[131698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjppwurlseuukvflbtgwucgozxzpeaxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296702.8346858-315-154414106074161/AnsiballZ_file.py'
Dec 09 16:11:43 compute-0 sudo[131698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:43 compute-0 ceph-mon[75222]: pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:43 compute-0 python3.9[131700]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:11:43 compute-0 sudo[131698]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:43 compute-0 sudo[131850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpphiferfhssgycttonhyiqdiqsxfxdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296703.5164485-323-308945156799/AnsiballZ_stat.py'
Dec 09 16:11:43 compute-0 sudo[131850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:43 compute-0 python3.9[131852]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:43 compute-0 sudo[131850]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:11:44 compute-0 sudo[131973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibmrednewvkbgxyapdqxenmftecrudej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296703.5164485-323-308945156799/AnsiballZ_copy.py'
Dec 09 16:11:44 compute-0 sudo[131973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:44 compute-0 python3.9[131975]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296703.5164485-323-308945156799/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=00e758ddc55a34ae8ccf237d115f45aaa7d998db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:44 compute-0 sudo[131973]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:45 compute-0 sudo[132125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqwyaqgpifegltcsiyjvybbtloijajks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296704.9864233-339-63275430483646/AnsiballZ_file.py'
Dec 09 16:11:45 compute-0 sudo[132125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:45 compute-0 ceph-mon[75222]: pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:45 compute-0 python3.9[132127]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:11:45 compute-0 sudo[132125]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:46 compute-0 sudo[132277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ondkcgzxygqewjfpkmcushwlcgirqank ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296705.7231596-347-45956276756036/AnsiballZ_stat.py'
Dec 09 16:11:46 compute-0 sudo[132277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:46 compute-0 python3.9[132279]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:46 compute-0 sudo[132277]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:46 compute-0 sudo[132400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adqpnjknwvwmpswcficxooawlomwhaii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296705.7231596-347-45956276756036/AnsiballZ_copy.py'
Dec 09 16:11:46 compute-0 sudo[132400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:46 compute-0 python3.9[132402]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296705.7231596-347-45956276756036/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=00e758ddc55a34ae8ccf237d115f45aaa7d998db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:46 compute-0 sudo[132400]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:47 compute-0 ceph-mon[75222]: pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:47 compute-0 sudo[132552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygpkuppbtkbfambtymcsnvpoexnuulkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296707.081268-363-195296767689808/AnsiballZ_file.py'
Dec 09 16:11:47 compute-0 sudo[132552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:47 compute-0 python3.9[132554]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:11:47 compute-0 sudo[132552]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:48 compute-0 sudo[132704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-karwgtpnayutoshdjfkyigtdbiwgvaea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296707.7805502-371-250158744993802/AnsiballZ_stat.py'
Dec 09 16:11:48 compute-0 sudo[132704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:48 compute-0 python3.9[132706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:48 compute-0 sudo[132704]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:48 compute-0 sudo[132827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rikeidoslrkmasketpcsjuphgtqtkgdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296707.7805502-371-250158744993802/AnsiballZ_copy.py'
Dec 09 16:11:48 compute-0 sudo[132827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:48 compute-0 sudo[132830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:11:48 compute-0 sudo[132830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:11:48 compute-0 sudo[132830]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:48 compute-0 python3.9[132829]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296707.7805502-371-250158744993802/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=00e758ddc55a34ae8ccf237d115f45aaa7d998db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:48 compute-0 sudo[132855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:11:48 compute-0 sudo[132855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:11:48 compute-0 sudo[132827]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:49 compute-0 sshd-session[126724]: Connection closed by 192.168.122.30 port 53620
Dec 09 16:11:49 compute-0 sshd-session[126721]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:11:49 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Dec 09 16:11:49 compute-0 systemd[1]: session-44.scope: Consumed 23.792s CPU time.
Dec 09 16:11:49 compute-0 systemd-logind[786]: Session 44 logged out. Waiting for processes to exit.
Dec 09 16:11:49 compute-0 systemd-logind[786]: Removed session 44.
Dec 09 16:11:49 compute-0 sudo[132855]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:49 compute-0 ceph-mon[75222]: pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:11:49 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:11:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:11:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:11:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:11:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:11:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:11:49 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:11:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:11:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:11:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:11:49 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:11:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:11:49 compute-0 sudo[132937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:11:49 compute-0 sudo[132937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:11:49 compute-0 sudo[132937]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:49 compute-0 sudo[132962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:11:49 compute-0 sudo[132962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:11:49 compute-0 podman[132998]: 2025-12-09 16:11:49.78488694 +0000 UTC m=+0.046297817 container create 213da3adae1f2954a88fe354613b05de3cc785294e44966c0cbdad42097903e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_noyce, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 09 16:11:49 compute-0 systemd[1]: Started libpod-conmon-213da3adae1f2954a88fe354613b05de3cc785294e44966c0cbdad42097903e5.scope.
Dec 09 16:11:49 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:11:49 compute-0 podman[132998]: 2025-12-09 16:11:49.764558714 +0000 UTC m=+0.025969621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:11:49 compute-0 podman[132998]: 2025-12-09 16:11:49.876436053 +0000 UTC m=+0.137846960 container init 213da3adae1f2954a88fe354613b05de3cc785294e44966c0cbdad42097903e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 16:11:49 compute-0 podman[132998]: 2025-12-09 16:11:49.884341818 +0000 UTC m=+0.145752705 container start 213da3adae1f2954a88fe354613b05de3cc785294e44966c0cbdad42097903e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_noyce, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:11:49 compute-0 podman[132998]: 2025-12-09 16:11:49.888551383 +0000 UTC m=+0.149962290 container attach 213da3adae1f2954a88fe354613b05de3cc785294e44966c0cbdad42097903e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:11:49 compute-0 tender_noyce[133014]: 167 167
Dec 09 16:11:49 compute-0 systemd[1]: libpod-213da3adae1f2954a88fe354613b05de3cc785294e44966c0cbdad42097903e5.scope: Deactivated successfully.
Dec 09 16:11:49 compute-0 podman[133019]: 2025-12-09 16:11:49.935949559 +0000 UTC m=+0.032670844 container died 213da3adae1f2954a88fe354613b05de3cc785294e44966c0cbdad42097903e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:11:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3830e9dfa2d83d677dbafa5470031544768e4a58460b78cd628812b2db42820-merged.mount: Deactivated successfully.
Dec 09 16:11:49 compute-0 podman[133019]: 2025-12-09 16:11:49.977805694 +0000 UTC m=+0.074526959 container remove 213da3adae1f2954a88fe354613b05de3cc785294e44966c0cbdad42097903e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_noyce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:11:49 compute-0 systemd[1]: libpod-conmon-213da3adae1f2954a88fe354613b05de3cc785294e44966c0cbdad42097903e5.scope: Deactivated successfully.
Dec 09 16:11:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:50 compute-0 podman[133041]: 2025-12-09 16:11:50.263612168 +0000 UTC m=+0.069164402 container create 17c2ecec40c403a75622c0ea1818f6fc70f4d6297f63847fb7fbd23baea5f73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:11:50 compute-0 systemd[1]: Started libpod-conmon-17c2ecec40c403a75622c0ea1818f6fc70f4d6297f63847fb7fbd23baea5f73f.scope.
Dec 09 16:11:50 compute-0 podman[133041]: 2025-12-09 16:11:50.234964095 +0000 UTC m=+0.040516389 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:11:50 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a2e9f770543c1988a7f14e7c721af9509871a250a21a67ec3e9199fcbd492e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a2e9f770543c1988a7f14e7c721af9509871a250a21a67ec3e9199fcbd492e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a2e9f770543c1988a7f14e7c721af9509871a250a21a67ec3e9199fcbd492e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a2e9f770543c1988a7f14e7c721af9509871a250a21a67ec3e9199fcbd492e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a2e9f770543c1988a7f14e7c721af9509871a250a21a67ec3e9199fcbd492e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:11:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:11:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:11:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:11:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:11:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:11:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:11:50 compute-0 podman[133041]: 2025-12-09 16:11:50.364171078 +0000 UTC m=+0.169723322 container init 17c2ecec40c403a75622c0ea1818f6fc70f4d6297f63847fb7fbd23baea5f73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 09 16:11:50 compute-0 podman[133041]: 2025-12-09 16:11:50.37704431 +0000 UTC m=+0.182596564 container start 17c2ecec40c403a75622c0ea1818f6fc70f4d6297f63847fb7fbd23baea5f73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_babbage, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:11:50 compute-0 podman[133041]: 2025-12-09 16:11:50.380973357 +0000 UTC m=+0.186525611 container attach 17c2ecec40c403a75622c0ea1818f6fc70f4d6297f63847fb7fbd23baea5f73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 09 16:11:50 compute-0 hopeful_babbage[133057]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:11:50 compute-0 hopeful_babbage[133057]: --> All data devices are unavailable
Dec 09 16:11:50 compute-0 systemd[1]: libpod-17c2ecec40c403a75622c0ea1818f6fc70f4d6297f63847fb7fbd23baea5f73f.scope: Deactivated successfully.
Dec 09 16:11:50 compute-0 podman[133077]: 2025-12-09 16:11:50.90663589 +0000 UTC m=+0.023761281 container died 17c2ecec40c403a75622c0ea1818f6fc70f4d6297f63847fb7fbd23baea5f73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_babbage, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:11:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9a2e9f770543c1988a7f14e7c721af9509871a250a21a67ec3e9199fcbd492e-merged.mount: Deactivated successfully.
Dec 09 16:11:50 compute-0 podman[133077]: 2025-12-09 16:11:50.946551261 +0000 UTC m=+0.063676632 container remove 17c2ecec40c403a75622c0ea1818f6fc70f4d6297f63847fb7fbd23baea5f73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:11:50 compute-0 systemd[1]: libpod-conmon-17c2ecec40c403a75622c0ea1818f6fc70f4d6297f63847fb7fbd23baea5f73f.scope: Deactivated successfully.
Dec 09 16:11:51 compute-0 sudo[132962]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:51 compute-0 sudo[133092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:11:51 compute-0 sudo[133092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:11:51 compute-0 sudo[133092]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:51 compute-0 sudo[133117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:11:51 compute-0 sudo[133117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:11:51 compute-0 ceph-mon[75222]: pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:51 compute-0 podman[133153]: 2025-12-09 16:11:51.432648212 +0000 UTC m=+0.038077322 container create 1eebce83bc90b670b24a17628baadb84cf509dcfb97bd895cd7d8e5ad1ce3976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:11:51 compute-0 systemd[1]: Started libpod-conmon-1eebce83bc90b670b24a17628baadb84cf509dcfb97bd895cd7d8e5ad1ce3976.scope.
Dec 09 16:11:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:11:51 compute-0 podman[133153]: 2025-12-09 16:11:51.415215625 +0000 UTC m=+0.020644765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:11:51 compute-0 podman[133153]: 2025-12-09 16:11:51.510758527 +0000 UTC m=+0.116187657 container init 1eebce83bc90b670b24a17628baadb84cf509dcfb97bd895cd7d8e5ad1ce3976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 09 16:11:51 compute-0 podman[133153]: 2025-12-09 16:11:51.518581961 +0000 UTC m=+0.124011071 container start 1eebce83bc90b670b24a17628baadb84cf509dcfb97bd895cd7d8e5ad1ce3976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_montalcini, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:11:51 compute-0 festive_montalcini[133169]: 167 167
Dec 09 16:11:51 compute-0 systemd[1]: libpod-1eebce83bc90b670b24a17628baadb84cf509dcfb97bd895cd7d8e5ad1ce3976.scope: Deactivated successfully.
Dec 09 16:11:51 compute-0 podman[133153]: 2025-12-09 16:11:51.52220115 +0000 UTC m=+0.127630270 container attach 1eebce83bc90b670b24a17628baadb84cf509dcfb97bd895cd7d8e5ad1ce3976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:11:51 compute-0 podman[133153]: 2025-12-09 16:11:51.523128846 +0000 UTC m=+0.128557976 container died 1eebce83bc90b670b24a17628baadb84cf509dcfb97bd895cd7d8e5ad1ce3976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 09 16:11:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c40602a14816bc12a2b14040a051575a719e5513472d352e52961e2417c0b361-merged.mount: Deactivated successfully.
Dec 09 16:11:51 compute-0 podman[133153]: 2025-12-09 16:11:51.561191746 +0000 UTC m=+0.166620866 container remove 1eebce83bc90b670b24a17628baadb84cf509dcfb97bd895cd7d8e5ad1ce3976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_montalcini, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 09 16:11:51 compute-0 systemd[1]: libpod-conmon-1eebce83bc90b670b24a17628baadb84cf509dcfb97bd895cd7d8e5ad1ce3976.scope: Deactivated successfully.
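The entries above trace one complete short-lived podman container (festive_montalcini): init, start, attach, died, and remove for the same 64-hex container ID, each stamped with both wall-clock time and a monotonic offset (m=+...); the matching create event precedes this excerpt. cephadm launches many such one-shot containers while probing the host. A minimal sketch for reconstructing these lifecycles from journal lines like the ones above (the regex and event names are taken from the entries themselves; the function name is illustrative):

    import re
    from collections import defaultdict

    # Matches podman journal lines such as:
    #   "... container start 1eebce83... (image=..., name=festive_montalcini, ...)"
    EVENT_RE = re.compile(
        r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")

    def container_lifecycles(lines):
        """Group podman lifecycle events by container ID, in log order."""
        events = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                event, cid = m.groups()
                events[cid].append(event)
        return events

Fed the lines above, container 1eebce83... yields ['init', 'start', 'attach', 'died', 'remove'].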
Dec 09 16:11:51 compute-0 podman[133192]: 2025-12-09 16:11:51.741281846 +0000 UTC m=+0.052240516 container create 7e702b45f2cadd5dc47ea4030e746f7a69910ec51b7478b280fe391c738b4730 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 09 16:11:51 compute-0 systemd[1]: Started libpod-conmon-7e702b45f2cadd5dc47ea4030e746f7a69910ec51b7478b280fe391c738b4730.scope.
Dec 09 16:11:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680b9b8946e3720d7d187516008848483fe853be1a3fac94d506b9e0309ee6b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:11:51 compute-0 podman[133192]: 2025-12-09 16:11:51.720233366 +0000 UTC m=+0.031192096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680b9b8946e3720d7d187516008848483fe853be1a3fac94d506b9e0309ee6b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680b9b8946e3720d7d187516008848483fe853be1a3fac94d506b9e0309ee6b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680b9b8946e3720d7d187516008848483fe853be1a3fac94d506b9e0309ee6b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
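The kernel notes that the overlay-mounted XFS filesystems only support timestamps until 2038 (0x7fffffff), meaning the backing filesystem was created without the XFS bigtime feature; this is a warning about future timestamp range, not a current fault. A quick check, assuming a reasonably recent xfsprogs whose xfs_info output includes the bigtime flag:

    import subprocess

    def has_bigtime(mountpoint="/var"):
        """True if the XFS filesystem behind mountpoint was made with bigtime=1."""
        out = subprocess.run(["xfs_info", mountpoint],
                             capture_output=True, text=True, check=True).stdout
        return "bigtime=1" in out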
Dec 09 16:11:51 compute-0 podman[133192]: 2025-12-09 16:11:51.829959048 +0000 UTC m=+0.140917738 container init 7e702b45f2cadd5dc47ea4030e746f7a69910ec51b7478b280fe391c738b4730 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:11:51 compute-0 podman[133192]: 2025-12-09 16:11:51.838919691 +0000 UTC m=+0.149878371 container start 7e702b45f2cadd5dc47ea4030e746f7a69910ec51b7478b280fe391c738b4730 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 09 16:11:51 compute-0 podman[133192]: 2025-12-09 16:11:51.842307223 +0000 UTC m=+0.153265943 container attach 7e702b45f2cadd5dc47ea4030e746f7a69910ec51b7478b280fe391c738b4730 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:11:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:52 compute-0 sweet_shannon[133208]: {
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:     "0": [
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:         {
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "devices": [
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "/dev/loop3"
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             ],
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_name": "ceph_lv0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_size": "21470642176",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "name": "ceph_lv0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "tags": {
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.cluster_name": "ceph",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.crush_device_class": "",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.encrypted": "0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.objectstore": "bluestore",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.osd_id": "0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.type": "block",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.vdo": "0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.with_tpm": "0"
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             },
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "type": "block",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "vg_name": "ceph_vg0"
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:         }
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:     ],
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:     "1": [
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:         {
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "devices": [
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "/dev/loop4"
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             ],
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_name": "ceph_lv1",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_size": "21470642176",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "name": "ceph_lv1",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "tags": {
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.cluster_name": "ceph",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.crush_device_class": "",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.encrypted": "0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.objectstore": "bluestore",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.osd_id": "1",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.type": "block",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.vdo": "0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.with_tpm": "0"
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             },
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "type": "block",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "vg_name": "ceph_vg1"
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:         }
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:     ],
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:     "2": [
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:         {
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "devices": [
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "/dev/loop5"
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             ],
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_name": "ceph_lv2",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_size": "21470642176",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "name": "ceph_lv2",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "tags": {
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.cluster_name": "ceph",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.crush_device_class": "",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.encrypted": "0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.objectstore": "bluestore",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.osd_id": "2",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.type": "block",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.vdo": "0",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:                 "ceph.with_tpm": "0"
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             },
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "type": "block",
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:             "vg_name": "ceph_vg2"
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:         }
Dec 09 16:11:52 compute-0 sweet_shannon[133208]:     ]
Dec 09 16:11:52 compute-0 sweet_shannon[133208]: }
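The JSON block above is the output of ceph-volume lvm list --format json from the sweet_shannon container: a map of OSD ID to the logical volume(s) backing it, with the ceph.* metadata present both as the flat lv_tags string and as the parsed tags object. A small sketch that reduces such a report to an OSD-to-device map (key names are exactly those shown above):

    import json

    def osd_device_map(report_text):
        """OSD id -> (lv_path, devices) from `ceph-volume lvm list --format json`."""
        report = json.loads(report_text)
        return {
            osd_id: (lv["lv_path"], lv["devices"])
            for osd_id, lvs in report.items()
            for lv in lvs
            if lv["tags"].get("ceph.type") == "block"
        }

For the report above this gives {"0": ("/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"]), "1": ("/dev/ceph_vg1/ceph_lv1", ["/dev/loop4"]), "2": ("/dev/ceph_vg2/ceph_lv2", ["/dev/loop5"])}: three single-LV bluestore OSDs on loop devices, as expected for a CI deployment.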
Dec 09 16:11:52 compute-0 systemd[1]: libpod-7e702b45f2cadd5dc47ea4030e746f7a69910ec51b7478b280fe391c738b4730.scope: Deactivated successfully.
Dec 09 16:11:52 compute-0 podman[133192]: 2025-12-09 16:11:52.151169119 +0000 UTC m=+0.462127799 container died 7e702b45f2cadd5dc47ea4030e746f7a69910ec51b7478b280fe391c738b4730 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 09 16:11:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-680b9b8946e3720d7d187516008848483fe853be1a3fac94d506b9e0309ee6b3-merged.mount: Deactivated successfully.
Dec 09 16:11:52 compute-0 podman[133192]: 2025-12-09 16:11:52.198865381 +0000 UTC m=+0.509824051 container remove 7e702b45f2cadd5dc47ea4030e746f7a69910ec51b7478b280fe391c738b4730 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:11:52 compute-0 systemd[1]: libpod-conmon-7e702b45f2cadd5dc47ea4030e746f7a69910ec51b7478b280fe391c738b4730.scope: Deactivated successfully.
Dec 09 16:11:52 compute-0 sudo[133117]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:52 compute-0 sudo[133232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:11:52 compute-0 sudo[133232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:11:52 compute-0 sudo[133232]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:52 compute-0 sudo[133257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:11:52 compute-0 sudo[133257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:11:52 compute-0 podman[133294]: 2025-12-09 16:11:52.685677438 +0000 UTC m=+0.054588200 container create 190328fd2f2e381fd483f7410f056836478300684eebaa03b305b4dffab186cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_napier, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:11:52 compute-0 systemd[1]: Started libpod-conmon-190328fd2f2e381fd483f7410f056836478300684eebaa03b305b4dffab186cf.scope.
Dec 09 16:11:52 compute-0 podman[133294]: 2025-12-09 16:11:52.655881441 +0000 UTC m=+0.024792213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:11:52 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:11:52 compute-0 podman[133294]: 2025-12-09 16:11:52.785688307 +0000 UTC m=+0.154599109 container init 190328fd2f2e381fd483f7410f056836478300684eebaa03b305b4dffab186cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:11:52 compute-0 podman[133294]: 2025-12-09 16:11:52.796265354 +0000 UTC m=+0.165176106 container start 190328fd2f2e381fd483f7410f056836478300684eebaa03b305b4dffab186cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_napier, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 16:11:52 compute-0 practical_napier[133310]: 167 167
Dec 09 16:11:52 compute-0 systemd[1]: libpod-190328fd2f2e381fd483f7410f056836478300684eebaa03b305b4dffab186cf.scope: Deactivated successfully.
Dec 09 16:11:52 compute-0 podman[133294]: 2025-12-09 16:11:52.801398003 +0000 UTC m=+0.170308815 container attach 190328fd2f2e381fd483f7410f056836478300684eebaa03b305b4dffab186cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 09 16:11:52 compute-0 podman[133294]: 2025-12-09 16:11:52.802607675 +0000 UTC m=+0.171518427 container died 190328fd2f2e381fd483f7410f056836478300684eebaa03b305b4dffab186cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_napier, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 09 16:11:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-0feb1bcb4bed17facbdd13f2515f5a0838a8c61fd4ef8d8903f861db801f4267-merged.mount: Deactivated successfully.
Dec 09 16:11:52 compute-0 podman[133294]: 2025-12-09 16:11:52.84596957 +0000 UTC m=+0.214880332 container remove 190328fd2f2e381fd483f7410f056836478300684eebaa03b305b4dffab186cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 16:11:52 compute-0 systemd[1]: libpod-conmon-190328fd2f2e381fd483f7410f056836478300684eebaa03b305b4dffab186cf.scope: Deactivated successfully.
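The one-shot containers that print "167 167" (festive_montalcini earlier and practical_napier here) are consistent with cephadm querying which uid and gid should own Ceph files; 167:167 is the ceph user and group in these images. A hypothetical reconstruction of such a probe (cephadm's actual command line inside the container is not shown in the log):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    def ceph_uid_gid():
        """Ask the image which uid/gid owns /var/lib/ceph (prints e.g. '167 167')."""
        out = subprocess.run(
            ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
            capture_output=True, text=True, check=True).stdout
        uid, gid = out.split()
        return int(uid), int(gid)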
Dec 09 16:11:53 compute-0 podman[133337]: 2025-12-09 16:11:53.047128849 +0000 UTC m=+0.055237377 container create bc5f1f9845343d69f251ba0cce1c5e9196bf278040e3261e97431800c2bc9f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:11:53 compute-0 systemd[1]: Started libpod-conmon-bc5f1f9845343d69f251ba0cce1c5e9196bf278040e3261e97431800c2bc9f35.scope.
Dec 09 16:11:53 compute-0 podman[133337]: 2025-12-09 16:11:53.01798797 +0000 UTC m=+0.026096568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:11:53 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:11:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf937ee01a0acfa2984651887031d1548467be7d55bd453ff7aaf0acb0e5093/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:11:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf937ee01a0acfa2984651887031d1548467be7d55bd453ff7aaf0acb0e5093/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:11:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf937ee01a0acfa2984651887031d1548467be7d55bd453ff7aaf0acb0e5093/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:11:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf937ee01a0acfa2984651887031d1548467be7d55bd453ff7aaf0acb0e5093/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:11:53 compute-0 podman[133337]: 2025-12-09 16:11:53.142581325 +0000 UTC m=+0.150689873 container init bc5f1f9845343d69f251ba0cce1c5e9196bf278040e3261e97431800c2bc9f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:11:53 compute-0 podman[133337]: 2025-12-09 16:11:53.153565782 +0000 UTC m=+0.161674330 container start bc5f1f9845343d69f251ba0cce1c5e9196bf278040e3261e97431800c2bc9f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:11:53 compute-0 podman[133337]: 2025-12-09 16:11:53.15863942 +0000 UTC m=+0.166747968 container attach bc5f1f9845343d69f251ba0cce1c5e9196bf278040e3261e97431800c2bc9f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 09 16:11:53 compute-0 ceph-mon[75222]: pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:53 compute-0 lvm[133433]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:11:53 compute-0 lvm[133432]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:11:53 compute-0 lvm[133433]: VG ceph_vg1 finished
Dec 09 16:11:53 compute-0 lvm[133432]: VG ceph_vg0 finished
Dec 09 16:11:53 compute-0 lvm[133435]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:11:53 compute-0 lvm[133435]: VG ceph_vg2 finished
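The lvm[] entries are event-based autoactivation at work: a udev-triggered pvscan sees each loop-backed PV appear, and as soon as the last PV of a volume group is online the VG is reported complete and activated (here each ceph_vgN has exactly one PV). The same membership can be confirmed afterwards with the lvm2 CLI, which supports JSON reports (run as root):

    import json, subprocess

    def ceph_lvs():
        """LV name, VG, devices and tags for the ceph_* VGs, via `lvs`."""
        out = subprocess.run(
            ["lvs", "--reportformat", "json",
             "-o", "lv_name,vg_name,devices,lv_tags"],
            capture_output=True, text=True, check=True).stdout
        report = json.loads(out)["report"][0]["lv"]
        return [lv for lv in report if lv["vg_name"].startswith("ceph_vg")]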
Dec 09 16:11:53 compute-0 nifty_vaughan[133354]: {}
Dec 09 16:11:53 compute-0 systemd[1]: libpod-bc5f1f9845343d69f251ba0cce1c5e9196bf278040e3261e97431800c2bc9f35.scope: Deactivated successfully.
Dec 09 16:11:53 compute-0 podman[133337]: 2025-12-09 16:11:53.936612342 +0000 UTC m=+0.944720860 container died bc5f1f9845343d69f251ba0cce1c5e9196bf278040e3261e97431800c2bc9f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:11:53 compute-0 systemd[1]: libpod-bc5f1f9845343d69f251ba0cce1c5e9196bf278040e3261e97431800c2bc9f35.scope: Consumed 1.243s CPU time.
Dec 09 16:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf937ee01a0acfa2984651887031d1548467be7d55bd453ff7aaf0acb0e5093-merged.mount: Deactivated successfully.
Dec 09 16:11:53 compute-0 podman[133337]: 2025-12-09 16:11:53.978708493 +0000 UTC m=+0.986817011 container remove bc5f1f9845343d69f251ba0cce1c5e9196bf278040e3261e97431800c2bc9f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 09 16:11:53 compute-0 systemd[1]: libpod-conmon-bc5f1f9845343d69f251ba0cce1c5e9196bf278040e3261e97431800c2bc9f35.scope: Deactivated successfully.
Dec 09 16:11:54 compute-0 sudo[133257]: pam_unix(sudo:session): session closed for user root
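This second probe ran ceph-volume in raw mode and, per the nifty_vaughan line above, printed {}: raw list only reports OSDs prepared directly on raw block devices, and all three OSDs here are LVM-based, so the empty object is expected rather than an error. The sudo record contains the full wrapper invocation, which can be replayed directly (the sketch trims the --image and --timeout options that appear in the logged command):

    import json, subprocess

    FSID = "67f67f44-54fc-54ea-8df0-10931b6ecdaf"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    def raw_list():
        """Replay the raw-mode inventory recorded in the sudo line above."""
        out = subprocess.run(
            ["sudo", "python3", CEPHADM, "ceph-volume", "--fsid", FSID,
             "--", "raw", "list", "--format", "json"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)  # {} when no raw-mode OSDs exist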
Dec 09 16:11:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:11:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:11:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:11:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
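Having collected the inventory, the mgr's cephadm module persists it via the monitor: the two handle_command entries show config-key set against mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0, cephadm's per-host cache in the mon key-value store. The cached value can be read back with the ceph CLI (assuming, as cephadm's cache format suggests, that the stored blob is JSON):

    import json, subprocess

    def cached_devices(host="compute-0"):
        """Fetch cephadm's cached device inventory for a host from the mon store."""
        out = subprocess.run(
            ["ceph", "config-key", "get", f"mgr/cephadm/host.{host}.devices.0"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)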
Dec 09 16:11:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:54 compute-0 sudo[133450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:11:54 compute-0 sudo[133450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:11:54 compute-0 sudo[133450]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:11:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:11:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:11:55 compute-0 ceph-mon[75222]: pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:55 compute-0 sshd-session[133475]: Accepted publickey for zuul from 192.168.122.30 port 33388 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:11:55 compute-0 systemd-logind[786]: New session 45 of user zuul.
Dec 09 16:11:55 compute-0 systemd[1]: Started Session 45 of User zuul.
Dec 09 16:11:55 compute-0 sshd-session[133475]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:11:55 compute-0 sudo[133628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wowqdoncwdijrqgbfwsaovsmrfewacgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296715.3385723-22-167521400913307/AnsiballZ_file.py'
Dec 09 16:11:55 compute-0 sudo[133628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:56 compute-0 python3.9[133630]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:56 compute-0 sudo[133628]: pam_unix(sudo:session): session closed for user root
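From here a new zuul SSH session runs the Ansible steps that stage OpenStack's Ceph client configuration. The ansible-ansible.builtin.file record logs every module parameter, defaults included, but its effective behavior is simply an idempotent mode-0755 directory ensure, roughly (a plain-Python equivalent for illustration, not the module's implementation):

    import os

    # Equivalent effect of the logged task:
    #   ansible.builtin.file: path=/var/lib/openstack/config/ceph
    #                         state=directory mode=0755
    path = "/var/lib/openstack/config/ceph"
    os.makedirs(path, mode=0o755, exist_ok=True)
    os.chmod(path, 0o755)  # makedirs' mode argument is masked by the umask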
Dec 09 16:11:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:11:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:11:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:11:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:11:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:11:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:11:56 compute-0 sudo[133780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkvfuvtzsjetezjyhwhukaqksiasfsqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296716.1845572-34-207683757274046/AnsiballZ_stat.py'
Dec 09 16:11:56 compute-0 sudo[133780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:56 compute-0 python3.9[133782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:56 compute-0 sudo[133780]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:57 compute-0 ceph-mon[75222]: pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:57 compute-0 sudo[133903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfstfxlpkzoiiwghkyfpodvozjmuukui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296716.1845572-34-207683757274046/AnsiballZ_copy.py'
Dec 09 16:11:57 compute-0 sudo[133903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:57 compute-0 python3.9[133905]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765296716.1845572-34-207683757274046/.source.conf _original_basename=ceph.conf follow=False checksum=5f78fb4578373953ef21d2b1ae110fa14721cd08 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:57 compute-0 sudo[133903]: pam_unix(sudo:session): session closed for user root
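The stat/copy pair is Ansible's usual idempotent file deployment: the destination is first stat'ed with get_checksum=True (checksum_algorithm=sha1), and the copy only rewrites the file when that digest differs from the source checksum recorded in the log (5f78fb4578373953ef21d2b1ae110fa14721cd08 for this ceph.conf). The comparison is a plain SHA-1 over file contents:

    import hashlib

    def sha1_of(path, chunk=65536):
        """SHA-1 of a file's contents, matching ansible's stat checksum."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    # The copy step is skipped when sha1_of(".../ceph/ceph.conf")
    # already equals the checksum logged above.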
Dec 09 16:11:57 compute-0 sudo[134055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajpyorurruhlzibmgpowufcpxguzyczt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296717.693435-34-259960667170660/AnsiballZ_stat.py'
Dec 09 16:11:57 compute-0 sudo[134055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:58 compute-0 python3.9[134057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:11:58 compute-0 sudo[134055]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:58 compute-0 sudo[134178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdktcqlexonqclculsapoccqasgijngu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296717.693435-34-259960667170660/AnsiballZ_copy.py'
Dec 09 16:11:58 compute-0 sudo[134178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:11:58 compute-0 python3.9[134180]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765296717.693435-34-259960667170660/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=32b98c53b5e04ff0e2611789fc8ffea1fdf91cb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:11:58 compute-0 sudo[134178]: pam_unix(sudo:session): session closed for user root
Dec 09 16:11:59 compute-0 sshd-session[133478]: Connection closed by 192.168.122.30 port 33388
Dec 09 16:11:59 compute-0 sshd-session[133475]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:11:59 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Dec 09 16:11:59 compute-0 systemd[1]: session-45.scope: Consumed 2.645s CPU time.
Dec 09 16:11:59 compute-0 systemd-logind[786]: Session 45 logged out. Waiting for processes to exit.
Dec 09 16:11:59 compute-0 systemd-logind[786]: Removed session 45.
Dec 09 16:11:59 compute-0 ceph-mon[75222]: pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:11:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:12:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:01 compute-0 ceph-mon[75222]: pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:03 compute-0 ceph-mon[75222]: pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:12:05 compute-0 ceph-mon[75222]: pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:05 compute-0 sshd-session[134205]: Accepted publickey for zuul from 192.168.122.30 port 44806 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:12:05 compute-0 systemd-logind[786]: New session 46 of user zuul.
Dec 09 16:12:05 compute-0 systemd[1]: Started Session 46 of User zuul.
Dec 09 16:12:05 compute-0 sshd-session[134205]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:12:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:06 compute-0 python3.9[134358]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:12:07 compute-0 sudo[134512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnuogoxhoiyegrgjkpcdeurmfpmytplv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296726.7110393-34-251312872859328/AnsiballZ_file.py'
Dec 09 16:12:07 compute-0 sudo[134512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:07 compute-0 ceph-mon[75222]: pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:07 compute-0 python3.9[134514]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:12:07 compute-0 sudo[134512]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:07 compute-0 sudo[134664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njabffvsfaimpkwmoiavrgxibcjbpcvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296727.4250786-34-110289327030893/AnsiballZ_file.py'
Dec 09 16:12:07 compute-0 sudo[134664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:07 compute-0 python3.9[134666]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:12:07 compute-0 sudo[134664]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:08 compute-0 python3.9[134816]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:12:09 compute-0 ceph-mon[75222]: pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:12:09 compute-0 sudo[134966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipomovzzrrtphdfeudqmhfrimvlvortg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296729.0021026-57-25410197291652/AnsiballZ_seboolean.py'
Dec 09 16:12:09 compute-0 sudo[134966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:09 compute-0 python3.9[134968]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 09 16:12:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:10 compute-0 sudo[134966]: pam_unix(sudo:session): session closed for user root
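The seboolean task persistently enables virt_sandbox_use_netlink, allowing sandboxed container/virt processes to use netlink sockets; the dbus-broker "avc: op=load_policy" entry a few lines below is consistent with the policy reload a persistent boolean write triggers. The resulting state is visible with the stock SELinux tools:

    import subprocess

    def sebool_on(name="virt_sandbox_use_netlink"):
        """True if the SELinux boolean is currently on, via getsebool."""
        out = subprocess.run(["getsebool", name],
                             capture_output=True, text=True, check=True).stdout
        return out.strip().endswith("> on")  # "virt_sandbox_use_netlink --> on"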
Dec 09 16:12:11 compute-0 ceph-mon[75222]: pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:11 compute-0 sudo[135122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltjxpknjcqkeinoosvwlrtovkrbspuni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296731.2526104-67-204486434445386/AnsiballZ_setup.py'
Dec 09 16:12:11 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 09 16:12:11 compute-0 sudo[135122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:11 compute-0 python3.9[135124]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:12:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:12 compute-0 sudo[135122]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:12 compute-0 sudo[135206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iomfytxxxodmsuueniocacoruhaaqtkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296731.2526104-67-204486434445386/AnsiballZ_dnf.py'
Dec 09 16:12:12 compute-0 sudo[135206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:12 compute-0 python3.9[135208]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:12:13 compute-0 ceph-mon[75222]: pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:14 compute-0 sudo[135206]: pam_unix(sudo:session): session closed for user root
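The dnf task ensures the openvswitch package is present (state=present leaves an already-installed package untouched); the long parameter dump is just the module logging every option, mostly at defaults. A trivial post-check from the same host:

    import subprocess

    def rpm_installed(pkg="openvswitch"):
        """True if the RPM package is installed (rpm -q exits 0)."""
        return subprocess.run(["rpm", "-q", pkg],
                              capture_output=True).returncode == 0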
Dec 09 16:12:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.419190) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296734419250, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1677, "num_deletes": 250, "total_data_size": 2532464, "memory_usage": 2565256, "flush_reason": "Manual Compaction"}
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296734433672, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1474208, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7378, "largest_seqno": 9054, "table_properties": {"data_size": 1468685, "index_size": 2535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15382, "raw_average_key_size": 20, "raw_value_size": 1455870, "raw_average_value_size": 1935, "num_data_blocks": 120, "num_entries": 752, "num_filter_entries": 752, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296572, "oldest_key_time": 1765296572, "file_creation_time": 1765296734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 14580 microseconds, and 8106 cpu microseconds.
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.433769) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1474208 bytes OK
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.433799) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.435218) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.435245) EVENT_LOG_v1 {"time_micros": 1765296734435237, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.435270) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2525083, prev total WAL file size 2525083, number of live WAL files 2.
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.436938) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1439KB)], [20(7464KB)]
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296734436993, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9118172, "oldest_snapshot_seqno": -1}
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3409 keys, 7171018 bytes, temperature: kUnknown
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296734499633, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7171018, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7144423, "index_size": 16949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 81556, "raw_average_key_size": 23, "raw_value_size": 7079070, "raw_average_value_size": 2076, "num_data_blocks": 751, "num_entries": 3409, "num_filter_entries": 3409, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765296734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.499981) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7171018 bytes
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.501266) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.3 rd, 114.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.3 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(11.0) write-amplify(4.9) OK, records in: 3845, records dropped: 436 output_compression: NoCompression
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.501291) EVENT_LOG_v1 {"time_micros": 1765296734501278, "job": 6, "event": "compaction_finished", "compaction_time_micros": 62760, "compaction_time_cpu_micros": 33149, "output_level": 6, "num_output_files": 1, "total_output_size": 7171018, "num_input_records": 3845, "num_output_records": 3409, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296734501673, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296734503293, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.436812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.503407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.503415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.503417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.503419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:12:14 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:12:14.503421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
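The two hex strings in the manual-compaction line above (job 6) are raw key bytes. Decoding them shows the mon is compacting its mgrstat key range, keys 0 through 251; a quick sketch, assuming xxd is available:

    # Decode the compaction boundary keys logged by rocksdb above.
    $ echo 6D6772737461740030 | xxd -r -p | cat -v
    mgrstat^@0
    $ echo 6D67727374617400323531 | xxd -r -p | cat -v
    mgrstat^@251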
Dec 09 16:12:15 compute-0 sudo[135359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hozlgeskpmcqcxgfoneazubqzldkbwxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296734.4467576-79-99634179914381/AnsiballZ_systemd.py'
Dec 09 16:12:15 compute-0 sudo[135359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:15 compute-0 python3.9[135361]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 16:12:15 compute-0 ceph-mon[75222]: pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:15 compute-0 sudo[135359]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:16 compute-0 sudo[135514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtxfrpkpefwuutpbixijyxhrckzqreck ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765296735.7138984-87-22223895685099/AnsiballZ_edpm_nftables_snippet.py'
Dec 09 16:12:16 compute-0 sudo[135514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:16 compute-0 python3[135516]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 09 16:12:16 compute-0 sudo[135514]: pam_unix(sudo:session): session closed for user root
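For reference, the edpm_nftables_snippet content above is iptables-style metadata that the role later renders into the nftables files assembled below. Rules 120 and 121 (table raw, chain OUTPUT/PREROUTING, jump NOTRACK, action append) map roughly onto these classic rules, shown as a hedged sketch rather than the role's literal output:

    # Approximate iptables equivalents of rule_name 120/121 above:
    # append to the raw table so geneve (udp/6081) traffic skips conntrack.
    iptables -t raw -A OUTPUT     -p udp --dport 6081 -j NOTRACK
    iptables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK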
Dec 09 16:12:16 compute-0 sudo[135666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tphzqmxctsvmrurkvprcxfodjwevesfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296736.7037594-96-34137990257288/AnsiballZ_file.py'
Dec 09 16:12:16 compute-0 sudo[135666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:17 compute-0 python3.9[135668]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:17 compute-0 sudo[135666]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:17 compute-0 ceph-mon[75222]: pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:17 compute-0 sudo[135818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiecpqrzkwbhnauaadhmrlcpbwnrstln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296737.4592257-104-257508799561826/AnsiballZ_stat.py'
Dec 09 16:12:17 compute-0 sudo[135818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:18 compute-0 python3.9[135820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:18 compute-0 sudo[135818]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:18 compute-0 sudo[135896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eregfqpdcfcgrxazagjhxqgqkzukkcce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296737.4592257-104-257508799561826/AnsiballZ_file.py'
Dec 09 16:12:18 compute-0 sudo[135896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:18 compute-0 python3.9[135898]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:18 compute-0 sudo[135896]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:19 compute-0 sudo[136048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzymfitluqgwzrxtklhdbeikhhikilxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296738.7666998-116-185111097277762/AnsiballZ_stat.py'
Dec 09 16:12:19 compute-0 sudo[136048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:19 compute-0 python3.9[136050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:19 compute-0 sudo[136048]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:19 compute-0 sudo[136126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyqrezzuamcutbnyhtsuzuftqcsnrceh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296738.7666998-116-185111097277762/AnsiballZ_file.py'
Dec 09 16:12:19 compute-0 sudo[136126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:12:19 compute-0 ceph-mon[75222]: pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:19 compute-0 python3.9[136128]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.e6m20anb recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:19 compute-0 sudo[136126]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:20 compute-0 sudo[136278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkexvwfeihbfcttniyvoeplorxvtzhnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296739.7662284-128-65460142187907/AnsiballZ_stat.py'
Dec 09 16:12:20 compute-0 sudo[136278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:20 compute-0 python3.9[136280]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:20 compute-0 sudo[136278]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:20 compute-0 sudo[136356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pucfscetebvompzipshyorgdfrvahbed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296739.7662284-128-65460142187907/AnsiballZ_file.py'
Dec 09 16:12:20 compute-0 sudo[136356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:20 compute-0 python3.9[136358]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:20 compute-0 sudo[136356]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:21 compute-0 sudo[136508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plkjoyowdliiunuqbmtyhhoerisiembc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296740.9726884-141-209305164634540/AnsiballZ_command.py'
Dec 09 16:12:21 compute-0 sudo[136508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:21 compute-0 ceph-mon[75222]: pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:21 compute-0 python3.9[136510]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:12:21 compute-0 sudo[136508]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:22 compute-0 sudo[136661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idoqrsjolyrerovuhdseqttcxspkbhoi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765296741.8397198-149-24731287041163/AnsiballZ_edpm_nftables_from_files.py'
Dec 09 16:12:22 compute-0 sudo[136661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:22 compute-0 python3[136663]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 09 16:12:22 compute-0 sudo[136661]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:22 compute-0 sshd-session[136664]: Invalid user dspace from 146.190.31.45 port 53588
Dec 09 16:12:22 compute-0 sshd-session[136664]: Connection closed by invalid user dspace 146.190.31.45 port 53588 [preauth]
Dec 09 16:12:23 compute-0 sudo[136815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttjwvoiffcvmfbjnwswhlbwecyyikocp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296742.7220314-157-54572361924651/AnsiballZ_stat.py'
Dec 09 16:12:23 compute-0 sudo[136815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:23 compute-0 python3.9[136817]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:23 compute-0 sudo[136815]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:23 compute-0 ceph-mon[75222]: pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:23 compute-0 sudo[136940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcfhddptjpqpjdcofqiculicthfpntvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296742.7220314-157-54572361924651/AnsiballZ_copy.py'
Dec 09 16:12:23 compute-0 sudo[136940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:23 compute-0 python3.9[136942]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296742.7220314-157-54572361924651/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:23 compute-0 sudo[136940]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:12:24 compute-0 sudo[137092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soegoakzdjejxblgeoydhioujzlatrbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296744.1618955-172-187839457746048/AnsiballZ_stat.py'
Dec 09 16:12:24 compute-0 sudo[137092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:24 compute-0 python3.9[137094]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:24 compute-0 sudo[137092]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:25 compute-0 sudo[137217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukdtbwplqzcjytmfrunivxlcoaanqovz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296744.1618955-172-187839457746048/AnsiballZ_copy.py'
Dec 09 16:12:25 compute-0 sudo[137217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:25 compute-0 python3.9[137219]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296744.1618955-172-187839457746048/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:25 compute-0 sudo[137217]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:25 compute-0 ceph-mon[75222]: pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:25 compute-0 sudo[137369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muwxjzuzahhymlygkqdcoxckgwegvguh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296745.4688215-187-62814209308286/AnsiballZ_stat.py'
Dec 09 16:12:25 compute-0 sudo[137369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:12:25
Dec 09 16:12:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:12:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:12:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.control', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'volumes', 'vms']
Dec 09 16:12:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
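The balancer pass above prepared 0 of 10 possible upmap changes, consistent with all 305 PGs being active+clean; the same state can be inspected interactively with the standard CLI (usage sketch):

    # Show balancer mode ("upmap") and whether a plan is being executed.
    ceph balancer status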
Dec 09 16:12:26 compute-0 python3.9[137371]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:26 compute-0 sudo[137369]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:26 compute-0 sudo[137494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmyjfnjhleeqltwggclskdppyyfgvswn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296745.4688215-187-62814209308286/AnsiballZ_copy.py'
Dec 09 16:12:26 compute-0 sudo[137494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:12:26 compute-0 python3.9[137496]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296745.4688215-187-62814209308286/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:26 compute-0 sudo[137494]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:12:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:12:27 compute-0 sudo[137646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzuxkjotbaqmtqdtjihwmcbjnvwdxthz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296746.8066912-202-158363769789318/AnsiballZ_stat.py'
Dec 09 16:12:27 compute-0 sudo[137646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:27 compute-0 python3.9[137648]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:27 compute-0 sudo[137646]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:27 compute-0 ceph-mon[75222]: pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:27 compute-0 sudo[137771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezxqtbsbhdpgrerormxosgabrvzvjygb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296746.8066912-202-158363769789318/AnsiballZ_copy.py'
Dec 09 16:12:27 compute-0 sudo[137771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:27 compute-0 python3.9[137773]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296746.8066912-202-158363769789318/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:27 compute-0 sudo[137771]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:28 compute-0 sudo[137923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvdchjbwvrxjfcbkuidzkpjcrshlekss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296748.1188412-217-156314625031124/AnsiballZ_stat.py'
Dec 09 16:12:28 compute-0 sudo[137923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:28 compute-0 python3.9[137925]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:28 compute-0 sudo[137923]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:29 compute-0 sudo[138048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrevqpiwkbljhywyhjfmmoczflitpibk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296748.1188412-217-156314625031124/AnsiballZ_copy.py'
Dec 09 16:12:29 compute-0 sudo[138048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:29 compute-0 python3.9[138050]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765296748.1188412-217-156314625031124/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:29 compute-0 sudo[138048]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:12:29 compute-0 ceph-mon[75222]: pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:29 compute-0 sudo[138200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqvnyxndxqfvyyfnstdqtqusqapmexif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296749.4254699-232-74876108072145/AnsiballZ_file.py'
Dec 09 16:12:29 compute-0 sudo[138200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:29 compute-0 python3.9[138202]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:29 compute-0 sudo[138200]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:30 compute-0 sudo[138352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynsrusqmxssmzvekmlvlstjsdxlelrqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296750.0916817-240-182593651898887/AnsiballZ_command.py'
Dec 09 16:12:30 compute-0 sudo[138352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:30 compute-0 python3.9[138354]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:12:30 compute-0 sudo[138352]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:31 compute-0 sudo[138507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odmbecjmjnkuhrftjopcmbrogfhfzfcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296750.7957387-248-212807651048432/AnsiballZ_blockinfile.py'
Dec 09 16:12:31 compute-0 sudo[138507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:31 compute-0 python3.9[138509]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:31 compute-0 sudo[138507]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:31 compute-0 ceph-mon[75222]: pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:31 compute-0 sudo[138659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsvwohmdnbmvgnhwbdtmmsdgieiqesow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296751.6440876-257-66399583151088/AnsiballZ_command.py'
Dec 09 16:12:31 compute-0 sudo[138659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:32 compute-0 python3.9[138661]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:12:32 compute-0 sudo[138659]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:32 compute-0 sudo[138812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fajiurhrafyarynqtjznixwgzofiblrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296752.3045578-265-78335278860223/AnsiballZ_stat.py'
Dec 09 16:12:32 compute-0 sudo[138812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:32 compute-0 python3.9[138814]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:12:32 compute-0 sudo[138812]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:33 compute-0 sudo[138966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agvheyopcmilvdyumbpfwkhjzbodurwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296752.9189038-273-188216433662109/AnsiballZ_command.py'
Dec 09 16:12:33 compute-0 sudo[138966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:33 compute-0 python3.9[138968]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:12:33 compute-0 sudo[138966]: pam_unix(sudo:session): session closed for user root
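The three command tasks above form the role's check-then-apply pattern: the full concatenated ruleset is dry-run checked with nft -c, the chain definitions are applied idempotently, and then flushes, rules, and jump updates land as one atomic nft -f - transaction. By hand, the same sequence looks like this (paths taken from the log above):

    # 1. Validate the whole ruleset without applying it.
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # 2. Ensure the chains exist (safe to re-run).
    nft -f /etc/nftables/edpm-chains.nft
    # 3. Flush and repopulate the rules in a single transaction.
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -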
Dec 09 16:12:33 compute-0 ceph-mon[75222]: pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:33 compute-0 sudo[139121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzmfdgqozfudpwxwumrldeeqzvyqccjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296753.5756214-281-70295786716999/AnsiballZ_file.py'
Dec 09 16:12:33 compute-0 sudo[139121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:34 compute-0 python3.9[139123]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:34 compute-0 sudo[139121]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:12:35 compute-0 python3.9[139273]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:12:35 compute-0 ceph-mon[75222]: pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:36 compute-0 sudo[139424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvvnrcjizucjlshiadxldwzqilakpbez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296755.7748027-321-68979781262924/AnsiballZ_command.py'
Dec 09 16:12:36 compute-0 sudo[139424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:36 compute-0 python3.9[139426]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:12:36 compute-0 ovs-vsctl[139427]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 09 16:12:36 compute-0 sudo[139424]: pam_unix(sudo:session): session closed for user root
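Each external_id written by the ovs-vsctl call above can be read back individually; a quick verification sketch using keys and values from the command itself:

    # Verify the OVN chassis registration written above.
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote       # "ssl:ovsdbserver-sb.openstack.svc:6642"
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip     # "172.19.0.100"
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type   # geneve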
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:12:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
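The pg_autoscaler pass above left every pool at its current pg_num (targets quantized to 1, 16, or 32); the same per-pool view is available interactively:

    # Per-pool autoscaler decisions, matching the log lines above.
    ceph osd pool autoscale-status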
Dec 09 16:12:36 compute-0 sudo[139577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xticcnrvuqvjgkixuzucwyhcyptaeuzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296756.4761999-330-208157969514414/AnsiballZ_command.py'
Dec 09 16:12:36 compute-0 sudo[139577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:36 compute-0 python3.9[139579]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:12:36 compute-0 sudo[139577]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:37 compute-0 sudo[139732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeyphugqanvxktcmmcalqmjkyzwzaoin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296757.1025374-338-155338074985418/AnsiballZ_command.py'
Dec 09 16:12:37 compute-0 sudo[139732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:37 compute-0 python3.9[139734]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:12:37 compute-0 ovs-vsctl[139735]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 09 16:12:37 compute-0 sudo[139732]: pam_unix(sudo:session): session closed for user root
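The Manager record created above exposes a local plaintext OVSDB listener; it can be confirmed afterwards with the standard command (usage sketch):

    # Confirm the ptcp manager added above.
    ovs-vsctl get-manager    # expected: ptcp:6640:127.0.0.1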
Dec 09 16:12:37 compute-0 ceph-mon[75222]: pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:38 compute-0 python3.9[139885]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:12:38 compute-0 sudo[140037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqpcbcneyysxknbnaweorweqmsavbdre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296758.6356678-355-275198657164426/AnsiballZ_file.py'
Dec 09 16:12:38 compute-0 sudo[140037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:39 compute-0 python3.9[140039]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:12:39 compute-0 sudo[140037]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:12:39 compute-0 sudo[140189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auggusxehrbjgzyhethuawcypketmbmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296759.31193-363-40359597534960/AnsiballZ_stat.py'
Dec 09 16:12:39 compute-0 sudo[140189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:39 compute-0 ceph-mon[75222]: pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:39 compute-0 python3.9[140191]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:39 compute-0 sudo[140189]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:40 compute-0 sudo[140267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngzbeloqwrdredyhhvzxwlbgmxfcjhme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296759.31193-363-40359597534960/AnsiballZ_file.py'
Dec 09 16:12:40 compute-0 sudo[140267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:40 compute-0 python3.9[140269]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:12:40 compute-0 sudo[140267]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:40 compute-0 sudo[140419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrorzmpktazmrpxbdyyctpvrdnqcxvzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296760.4070199-363-48221625067170/AnsiballZ_stat.py'
Dec 09 16:12:40 compute-0 sudo[140419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:40 compute-0 python3.9[140421]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:40 compute-0 sudo[140419]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:41 compute-0 sudo[140497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvxzgyznggekupjwnfggdedxsqocggjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296760.4070199-363-48221625067170/AnsiballZ_file.py'
Dec 09 16:12:41 compute-0 sudo[140497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:41 compute-0 python3.9[140499]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:12:41 compute-0 sudo[140497]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:41 compute-0 ceph-mon[75222]: pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:41 compute-0 sudo[140649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iajtxgtgpcpumphpkngxdtdryysgrksr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296761.5533624-386-25300042563694/AnsiballZ_file.py'
Dec 09 16:12:41 compute-0 sudo[140649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:42 compute-0 python3.9[140651]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:42 compute-0 sudo[140649]: pam_unix(sudo:session): session closed for user root
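The mode=420 in the file task above is not an error: an unquoted YAML 0644 reaches the module as the decimal integer 420, and 420 decimal is exactly 0644 octal, so the directory still gets the intended permission bits:

    # 420 decimal == 0644 octal (rw-r--r--).
    printf '%o\n' 420    # prints 644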
Dec 09 16:12:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:42 compute-0 sudo[140801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gthypwrymeietpydxvzjpxxbrocpozcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296762.2290475-394-122900002937185/AnsiballZ_stat.py'
Dec 09 16:12:42 compute-0 sudo[140801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:42 compute-0 python3.9[140803]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:42 compute-0 sudo[140801]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:42 compute-0 sudo[140879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icvqyqtkzlxqlkijuvcniyhtcxqkcvxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296762.2290475-394-122900002937185/AnsiballZ_file.py'
Dec 09 16:12:42 compute-0 sudo[140879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:43 compute-0 python3.9[140881]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:43 compute-0 sudo[140879]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:43 compute-0 sudo[141031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xytjhedizzuhojnhlbcbwspelxobkjhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296763.297971-406-152304215474423/AnsiballZ_stat.py'
Dec 09 16:12:43 compute-0 sudo[141031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:43 compute-0 ceph-mon[75222]: pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:43 compute-0 python3.9[141033]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:43 compute-0 sudo[141031]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:44 compute-0 sudo[141109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqdcnukgttxwcbtltdzrewznrxqmqydv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296763.297971-406-152304215474423/AnsiballZ_file.py'
Dec 09 16:12:44 compute-0 sudo[141109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:44 compute-0 python3.9[141111]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:44 compute-0 sudo[141109]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:12:44 compute-0 sudo[141261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dojoxrymhtgwshfsudjbpdxbxjiqnwol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296764.4319134-418-208470042024902/AnsiballZ_systemd.py'
Dec 09 16:12:44 compute-0 sudo[141261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:44 compute-0 ceph-mon[75222]: pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:45 compute-0 python3.9[141263]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:12:45 compute-0 systemd[1]: Reloading.
Dec 09 16:12:45 compute-0 systemd-sysv-generator[141296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:12:45 compute-0 systemd-rc-local-generator[141292]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:12:45 compute-0 sudo[141261]: pam_unix(sudo:session): session closed for user root
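
The block above shows the recurring edpm-ansible pattern for installing a systemd unit: ansible.legacy.stat checks the destination, ansible.legacy.file/copy lay down the unit file and a matching preset under /etc/systemd/system-preset/, and ansible.builtin.systemd runs with daemon_reload=True, enabled=True, state=started. That module call is roughly equivalent to:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service

The preset file's contents are not logged; a plausible (hypothetical) one-line body in the standard preset syntax would be:

    enable edpm-container-shutdown.service
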
Dec 09 16:12:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:46 compute-0 sudo[141450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmobieluyfcvqqrflzicfuowpvtlatyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296765.5426195-426-98269477087273/AnsiballZ_stat.py'
Dec 09 16:12:46 compute-0 sudo[141450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:46 compute-0 python3.9[141452]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:46 compute-0 sudo[141450]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:46 compute-0 sudo[141528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pukbiwanjxsqdtpwaxxwnguopftpspuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296765.5426195-426-98269477087273/AnsiballZ_file.py'
Dec 09 16:12:46 compute-0 sudo[141528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:46 compute-0 python3.9[141530]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:46 compute-0 sudo[141528]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:47 compute-0 ceph-mon[75222]: pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:47 compute-0 sudo[141680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqrgqqtggaymyirnowhxyksgnavibbow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296766.9777205-438-152551877863900/AnsiballZ_stat.py'
Dec 09 16:12:47 compute-0 sudo[141680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:47 compute-0 python3.9[141682]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:47 compute-0 sudo[141680]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:47 compute-0 sudo[141758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwwyhcniumsofqvskeaymwgaxqpxruux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296766.9777205-438-152551877863900/AnsiballZ_file.py'
Dec 09 16:12:47 compute-0 sudo[141758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:47 compute-0 python3.9[141760]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:47 compute-0 sudo[141758]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:48 compute-0 sudo[141910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvwpmibfigrogcvqadbdtvovhwbvnbqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296768.0804696-450-125212227902492/AnsiballZ_systemd.py'
Dec 09 16:12:48 compute-0 sudo[141910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:48 compute-0 python3.9[141912]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:12:48 compute-0 systemd[1]: Reloading.
Dec 09 16:12:48 compute-0 systemd-rc-local-generator[141936]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:12:48 compute-0 systemd-sysv-generator[141942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:12:48 compute-0 systemd[1]: Starting Create netns directory...
Dec 09 16:12:49 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 09 16:12:49 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 09 16:12:49 compute-0 systemd[1]: Finished Create netns directory.
Dec 09 16:12:49 compute-0 sudo[141910]: pam_unix(sudo:session): session closed for user root
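
netns-placeholder appears to be a oneshot service: systemd logs "Starting Create netns directory", a transient run-netns-placeholder.mount comes and goes (creating a network namespace bind-mounts an nsfs under /run/netns, and deleting it unmounts it), and the unit finishes immediately. A hypothetical unit-file sketch consistent with these messages, assuming ip(8) is used to create and remove a throwaway namespace:

    [Unit]
    Description=Create netns directory

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/ip netns add placeholder
    ExecStart=/usr/sbin/ip netns delete placeholder

    [Install]
    WantedBy=multi-user.target

The point of such a unit is its side effect: creating any namespace forces /run/netns to exist as a mount point that container-managed namespaces can later share.
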
Dec 09 16:12:49 compute-0 ceph-mon[75222]: pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:12:49 compute-0 sudo[142103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooqfvetnlalpanxmwovaxhrszxpllzoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296769.3043919-460-260166513568208/AnsiballZ_file.py'
Dec 09 16:12:49 compute-0 sudo[142103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:49 compute-0 python3.9[142105]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:12:49 compute-0 sudo[142103]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:50 compute-0 sudo[142255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izudnkggqljvywijvdnblyyefjpqtoop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296769.9741597-468-21748054498365/AnsiballZ_stat.py'
Dec 09 16:12:50 compute-0 sudo[142255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:50 compute-0 python3.9[142257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:50 compute-0 sudo[142255]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:50 compute-0 sudo[142378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgokyfofyzrnoqfyqfbeynnhghzsszgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296769.9741597-468-21748054498365/AnsiballZ_copy.py'
Dec 09 16:12:50 compute-0 sudo[142378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:51 compute-0 python3.9[142380]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765296769.9741597-468-21748054498365/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:12:51 compute-0 sudo[142378]: pam_unix(sudo:session): session closed for user root
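
Because Ansible logs every module parameter, the originating task can be reconstructed almost verbatim. The copy above corresponds roughly to the following task (values taken from the logged invocation; the task name is assumed):

    - name: Install ovn_controller healthcheck script
      ansible.builtin.copy:
        src: healthcheck
        dest: /var/lib/openstack/healthchecks/ovn_controller/
        owner: zuul
        group: zuul
        mode: "0700"
        setype: container_file_t

setype=container_file_t matters here: it gives the file an SELinux label that confined container processes may read, so the healthcheck script can be bind-mounted into the ovn_controller container.
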
Dec 09 16:12:51 compute-0 ceph-mon[75222]: pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:51 compute-0 sudo[142530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xssuwkgyvkghufqxavgvydwjgtpmcslc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296771.4474478-485-216474951332019/AnsiballZ_file.py'
Dec 09 16:12:51 compute-0 sudo[142530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:51 compute-0 python3.9[142532]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:12:51 compute-0 sudo[142530]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:52 compute-0 sudo[142682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcqjyyokwmtmquhrtsywavfzodxrjpwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296772.130094-493-46606850118973/AnsiballZ_stat.py'
Dec 09 16:12:52 compute-0 sudo[142682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:52 compute-0 python3.9[142684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:12:52 compute-0 sudo[142682]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:53 compute-0 sudo[142805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbuwnrpbnvwqgilziyisygvmeenogmjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296772.130094-493-46606850118973/AnsiballZ_copy.py'
Dec 09 16:12:53 compute-0 sudo[142805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:53 compute-0 ceph-mon[75222]: pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:53 compute-0 python3.9[142807]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765296772.130094-493-46606850118973/.source.json _original_basename=.fwredsvw follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:53 compute-0 sudo[142805]: pam_unix(sudo:session): session closed for user root
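
/var/lib/kolla/config_files/ovn_controller.json is consumed by the kolla container entrypoint (kolla_start, selected via KOLLA_CONFIG_FILE), which copies the listed config files into place and then execs the service command. The file's actual contents are not in the log (content=NOT_LOGGING_PARAMETER); the shape below is purely illustrative of the kolla config.json format, with an assumed command line:

    {
        "command": "/usr/bin/ovn-controller unix:/run/openvswitch/db.sock",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/src/*",
                "dest": "/",
                "merge": true,
                "preserve_properties": true
            }
        ],
        "permissions": []
    }
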
Dec 09 16:12:53 compute-0 sudo[142957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wveqbspsldfapxyhrpnrlyerxlykstca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296773.4379547-508-14106983988383/AnsiballZ_file.py'
Dec 09 16:12:53 compute-0 sudo[142957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:54 compute-0 python3.9[142959]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:12:54 compute-0 sudo[142957]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:54 compute-0 sudo[142984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:12:54 compute-0 sudo[142984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:12:54 compute-0 sudo[142984]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:54 compute-0 sudo[143009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:12:54 compute-0 sudo[143009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:12:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:12:54 compute-0 sudo[143170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izgdvrjqfczjyanexupowaleallhnceu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296774.2134776-516-72133637347483/AnsiballZ_stat.py'
Dec 09 16:12:54 compute-0 sudo[143170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:54 compute-0 sudo[143170]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:54 compute-0 sudo[143009]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:12:54 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:12:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:12:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:12:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:12:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:12:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:12:54 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:12:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:12:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:12:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:12:54 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
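
This burst of mon_command calls is the cephadm mgr module refreshing what it needs to (re)create OSDs on this host: "config generate-minimal-conf" returns a minimal client ceph.conf (essentially fsid and mon_host), and "auth get client.bootstrap-osd" fetches the keyring that cephadm later feeds to ceph-volume via --config-json on stdin. The same queries can be issued manually from any node holding an admin keyring:

    ceph config generate-minimal-conf
    ceph auth get client.bootstrap-osd
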
Dec 09 16:12:54 compute-0 sudo[143239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:12:54 compute-0 sudo[143239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:12:54 compute-0 sudo[143239]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:54 compute-0 sudo[143287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:12:54 compute-0 sudo[143287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:12:55 compute-0 sudo[143362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxwrqwfikxonfizntatntuvenembnwek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296774.2134776-516-72133637347483/AnsiballZ_copy.py'
Dec 09 16:12:55 compute-0 sudo[143362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:55 compute-0 podman[143375]: 2025-12-09 16:12:55.224612314 +0000 UTC m=+0.047649181 container create f74a4f0f504a6f41e14315ab610ef6ed6e5954350d30b5cf451614ca58812a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_burnell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:12:55 compute-0 sudo[143362]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:55 compute-0 systemd[1]: Started libpod-conmon-f74a4f0f504a6f41e14315ab610ef6ed6e5954350d30b5cf451614ca58812a4c.scope.
Dec 09 16:12:55 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:12:55 compute-0 podman[143375]: 2025-12-09 16:12:55.208963891 +0000 UTC m=+0.032000788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:12:55 compute-0 podman[143375]: 2025-12-09 16:12:55.30567298 +0000 UTC m=+0.128709867 container init f74a4f0f504a6f41e14315ab610ef6ed6e5954350d30b5cf451614ca58812a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_burnell, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:12:55 compute-0 podman[143375]: 2025-12-09 16:12:55.313562914 +0000 UTC m=+0.136599791 container start f74a4f0f504a6f41e14315ab610ef6ed6e5954350d30b5cf451614ca58812a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:12:55 compute-0 podman[143375]: 2025-12-09 16:12:55.319657699 +0000 UTC m=+0.142694596 container attach f74a4f0f504a6f41e14315ab610ef6ed6e5954350d30b5cf451614ca58812a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:12:55 compute-0 condescending_burnell[143392]: 167 167
Dec 09 16:12:55 compute-0 systemd[1]: libpod-f74a4f0f504a6f41e14315ab610ef6ed6e5954350d30b5cf451614ca58812a4c.scope: Deactivated successfully.
Dec 09 16:12:55 compute-0 podman[143375]: 2025-12-09 16:12:55.321692694 +0000 UTC m=+0.144729561 container died f74a4f0f504a6f41e14315ab610ef6ed6e5954350d30b5cf451614ca58812a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 09 16:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b1effac8d7d37e37875185908a2d77bb9cbed4ac9673014f98c15249b557c16-merged.mount: Deactivated successfully.
Dec 09 16:12:55 compute-0 podman[143375]: 2025-12-09 16:12:55.363507847 +0000 UTC m=+0.186544744 container remove f74a4f0f504a6f41e14315ab610ef6ed6e5954350d30b5cf451614ca58812a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_burnell, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 16:12:55 compute-0 ceph-mon[75222]: pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:12:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:12:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:12:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:12:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:12:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:12:55 compute-0 systemd[1]: libpod-conmon-f74a4f0f504a6f41e14315ab610ef6ed6e5954350d30b5cf451614ca58812a4c.scope: Deactivated successfully.
Dec 09 16:12:55 compute-0 podman[143442]: 2025-12-09 16:12:55.524350204 +0000 UTC m=+0.045057062 container create cd65535fd260baee4fc71de147e09fcbcda552492a552e399a129a0e3ce99dea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_margulis, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:12:55 compute-0 systemd[1]: Started libpod-conmon-cd65535fd260baee4fc71de147e09fcbcda552492a552e399a129a0e3ce99dea.scope.
Dec 09 16:12:55 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8f72092fd0b01d23c619b2562d2b7dda07ac5e607d423cb7e576252893aadbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8f72092fd0b01d23c619b2562d2b7dda07ac5e607d423cb7e576252893aadbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8f72092fd0b01d23c619b2562d2b7dda07ac5e607d423cb7e576252893aadbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8f72092fd0b01d23c619b2562d2b7dda07ac5e607d423cb7e576252893aadbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8f72092fd0b01d23c619b2562d2b7dda07ac5e607d423cb7e576252893aadbd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:55 compute-0 podman[143442]: 2025-12-09 16:12:55.505358829 +0000 UTC m=+0.026065737 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:12:55 compute-0 podman[143442]: 2025-12-09 16:12:55.608151454 +0000 UTC m=+0.128858342 container init cd65535fd260baee4fc71de147e09fcbcda552492a552e399a129a0e3ce99dea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_margulis, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 09 16:12:55 compute-0 podman[143442]: 2025-12-09 16:12:55.614241359 +0000 UTC m=+0.134948227 container start cd65535fd260baee4fc71de147e09fcbcda552492a552e399a129a0e3ce99dea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_margulis, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:12:55 compute-0 podman[143442]: 2025-12-09 16:12:55.617627911 +0000 UTC m=+0.138334799 container attach cd65535fd260baee4fc71de147e09fcbcda552492a552e399a129a0e3ce99dea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_margulis, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:12:55 compute-0 sudo[143591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyckeitqqreeowawqfqpfdswfygizlut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296775.4982986-533-214240845697725/AnsiballZ_container_config_data.py'
Dec 09 16:12:55 compute-0 sudo[143591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:56 compute-0 youthful_margulis[143506]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:12:56 compute-0 youthful_margulis[143506]: --> All data devices are unavailable
Dec 09 16:12:56 compute-0 systemd[1]: libpod-cd65535fd260baee4fc71de147e09fcbcda552492a552e399a129a0e3ce99dea.scope: Deactivated successfully.
Dec 09 16:12:56 compute-0 podman[143442]: 2025-12-09 16:12:56.14280828 +0000 UTC m=+0.663515168 container died cd65535fd260baee4fc71de147e09fcbcda552492a552e399a129a0e3ce99dea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 09 16:12:56 compute-0 python3.9[143593]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 09 16:12:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8f72092fd0b01d23c619b2562d2b7dda07ac5e607d423cb7e576252893aadbd-merged.mount: Deactivated successfully.
Dec 09 16:12:56 compute-0 sudo[143591]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:56 compute-0 podman[143442]: 2025-12-09 16:12:56.194108155 +0000 UTC m=+0.714815043 container remove cd65535fd260baee4fc71de147e09fcbcda552492a552e399a129a0e3ce99dea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:12:56 compute-0 systemd[1]: libpod-conmon-cd65535fd260baee4fc71de147e09fcbcda552492a552e399a129a0e3ce99dea.scope: Deactivated successfully.
Dec 09 16:12:56 compute-0 sudo[143287]: pam_unix(sudo:session): session closed for user root
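
The ceph-volume run above (sudo pid 143287) is the OSD-creation step, condensed from the logged command line:

    cephadm ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf \
        --config-json - -- lvm batch --no-auto \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --objectstore bluestore --yes --no-systemd

Its output ("passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable") reads like a failure but is most likely an idempotent no-op: the lvm list output further below shows all three logical volumes already tagged with osd_id 0 through 2, so ceph-volume correctly refuses to re-prepare them.
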
Dec 09 16:12:56 compute-0 sudo[143639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:12:56 compute-0 sudo[143639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:12:56 compute-0 sudo[143639]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:56 compute-0 sudo[143664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:12:56 compute-0 sudo[143664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:12:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:12:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:12:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:12:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:12:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:12:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:12:56 compute-0 podman[143753]: 2025-12-09 16:12:56.638414566 +0000 UTC m=+0.037933685 container create 402777d5294990a0202c68fa292cf363307f959b4129ac0ed8d6c00ee91a15b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:12:56 compute-0 systemd[1]: Started libpod-conmon-402777d5294990a0202c68fa292cf363307f959b4129ac0ed8d6c00ee91a15b1.scope.
Dec 09 16:12:56 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:12:56 compute-0 podman[143753]: 2025-12-09 16:12:56.623705057 +0000 UTC m=+0.023224196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:12:56 compute-0 podman[143753]: 2025-12-09 16:12:56.721976947 +0000 UTC m=+0.121496096 container init 402777d5294990a0202c68fa292cf363307f959b4129ac0ed8d6c00ee91a15b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 09 16:12:56 compute-0 podman[143753]: 2025-12-09 16:12:56.72820954 +0000 UTC m=+0.127728659 container start 402777d5294990a0202c68fa292cf363307f959b4129ac0ed8d6c00ee91a15b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wilbur, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:12:56 compute-0 podman[143753]: 2025-12-09 16:12:56.731892822 +0000 UTC m=+0.131411961 container attach 402777d5294990a0202c68fa292cf363307f959b4129ac0ed8d6c00ee91a15b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Dec 09 16:12:56 compute-0 intelligent_wilbur[143793]: 167 167
Dec 09 16:12:56 compute-0 systemd[1]: libpod-402777d5294990a0202c68fa292cf363307f959b4129ac0ed8d6c00ee91a15b1.scope: Deactivated successfully.
Dec 09 16:12:56 compute-0 podman[143753]: 2025-12-09 16:12:56.733691842 +0000 UTC m=+0.133210961 container died 402777d5294990a0202c68fa292cf363307f959b4129ac0ed8d6c00ee91a15b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wilbur, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 09 16:12:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-03e1453c5076848379d7649dcef7cfdffbc389025193f7e6705814a1427fcd12-merged.mount: Deactivated successfully.
Dec 09 16:12:56 compute-0 podman[143753]: 2025-12-09 16:12:56.770096383 +0000 UTC m=+0.169615502 container remove 402777d5294990a0202c68fa292cf363307f959b4129ac0ed8d6c00ee91a15b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wilbur, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:12:56 compute-0 systemd[1]: libpod-conmon-402777d5294990a0202c68fa292cf363307f959b4129ac0ed8d6c00ee91a15b1.scope: Deactivated successfully.
Dec 09 16:12:56 compute-0 sudo[143860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxxhxitzcrumawuatbsjzzejpyjutatf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296776.4222112-542-274909658993349/AnsiballZ_container_config_hash.py'
Dec 09 16:12:56 compute-0 sudo[143860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:56 compute-0 podman[143868]: 2025-12-09 16:12:56.925076088 +0000 UTC m=+0.042402829 container create aeb5ffaa0175ae5d3aa5e743f6f025d3914585b118e71cea864f71d6d2535419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:12:56 compute-0 systemd[1]: Started libpod-conmon-aeb5ffaa0175ae5d3aa5e743f6f025d3914585b118e71cea864f71d6d2535419.scope.
Dec 09 16:12:57 compute-0 podman[143868]: 2025-12-09 16:12:56.906215544 +0000 UTC m=+0.023542295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:12:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6c2167c5d5a4c140a48def58f4b20fd7c9e13bf2b1fbcf25c79febe3e9346f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:57 compute-0 python3.9[143862]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 16:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6c2167c5d5a4c140a48def58f4b20fd7c9e13bf2b1fbcf25c79febe3e9346f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6c2167c5d5a4c140a48def58f4b20fd7c9e13bf2b1fbcf25c79febe3e9346f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6c2167c5d5a4c140a48def58f4b20fd7c9e13bf2b1fbcf25c79febe3e9346f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:57 compute-0 sudo[143860]: pam_unix(sudo:session): session closed for user root
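
ansible-container_config_hash walks the config volumes under config_vol_prefix (/var/lib/config-data here) and derives a per-volume hash, typically used to decide whether a container must be restarted when its configuration changes. The module's exact algorithm is not visible in the log; a rough, hypothetical Python illustration of the idea:

    import hashlib
    from pathlib import Path

    def config_hashes(prefix: str = "/var/lib/config-data") -> dict:
        """Illustrative only: hash every file under each config volume."""
        result = {}
        for vol in sorted(Path(prefix).iterdir()):
            digest = hashlib.sha1()
            for f in sorted(p for p in vol.rglob("*") if p.is_file()):
                digest.update(f.read_bytes())
            result[vol.name] = digest.hexdigest()
        return result
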
Dec 09 16:12:57 compute-0 podman[143868]: 2025-12-09 16:12:57.037176672 +0000 UTC m=+0.154503443 container init aeb5ffaa0175ae5d3aa5e743f6f025d3914585b118e71cea864f71d6d2535419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:12:57 compute-0 podman[143868]: 2025-12-09 16:12:57.044433643 +0000 UTC m=+0.161760394 container start aeb5ffaa0175ae5d3aa5e743f6f025d3914585b118e71cea864f71d6d2535419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:12:57 compute-0 podman[143868]: 2025-12-09 16:12:57.050148042 +0000 UTC m=+0.167474793 container attach aeb5ffaa0175ae5d3aa5e743f6f025d3914585b118e71cea864f71d6d2535419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_tesla, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 16:12:57 compute-0 objective_tesla[143884]: {
Dec 09 16:12:57 compute-0 objective_tesla[143884]:     "0": [
Dec 09 16:12:57 compute-0 objective_tesla[143884]:         {
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "devices": [
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "/dev/loop3"
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             ],
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_name": "ceph_lv0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_size": "21470642176",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "name": "ceph_lv0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "tags": {
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.cluster_name": "ceph",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.crush_device_class": "",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.encrypted": "0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.objectstore": "bluestore",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.osd_id": "0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.type": "block",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.vdo": "0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.with_tpm": "0"
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             },
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "type": "block",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "vg_name": "ceph_vg0"
Dec 09 16:12:57 compute-0 objective_tesla[143884]:         }
Dec 09 16:12:57 compute-0 objective_tesla[143884]:     ],
Dec 09 16:12:57 compute-0 objective_tesla[143884]:     "1": [
Dec 09 16:12:57 compute-0 objective_tesla[143884]:         {
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "devices": [
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "/dev/loop4"
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             ],
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_name": "ceph_lv1",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_size": "21470642176",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "name": "ceph_lv1",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "tags": {
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.cluster_name": "ceph",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.crush_device_class": "",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.encrypted": "0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.objectstore": "bluestore",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.osd_id": "1",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.type": "block",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.vdo": "0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.with_tpm": "0"
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             },
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "type": "block",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "vg_name": "ceph_vg1"
Dec 09 16:12:57 compute-0 objective_tesla[143884]:         }
Dec 09 16:12:57 compute-0 objective_tesla[143884]:     ],
Dec 09 16:12:57 compute-0 objective_tesla[143884]:     "2": [
Dec 09 16:12:57 compute-0 objective_tesla[143884]:         {
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "devices": [
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "/dev/loop5"
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             ],
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_name": "ceph_lv2",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_size": "21470642176",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "name": "ceph_lv2",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "tags": {
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.cluster_name": "ceph",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.crush_device_class": "",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.encrypted": "0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.objectstore": "bluestore",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.osd_id": "2",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.type": "block",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.vdo": "0",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:                 "ceph.with_tpm": "0"
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             },
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "type": "block",
Dec 09 16:12:57 compute-0 objective_tesla[143884]:             "vg_name": "ceph_vg2"
Dec 09 16:12:57 compute-0 objective_tesla[143884]:         }
Dec 09 16:12:57 compute-0 objective_tesla[143884]:     ]
Dec 09 16:12:57 compute-0 objective_tesla[143884]: }
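[Annotation] The JSON emitted by the objective_tesla container above is the tail of a ceph-volume "lvm list --format json" report: a map keyed by OSD id ("0", "1", "2"), each value a list of logical volumes whose ceph.* tags identify the cluster fsid, OSD fsid, and device role. A minimal Python sketch (variable and function names are illustrative, not from the tooling) that reduces such a report to an osd_id -> block-device mapping:

    import json

    def osd_block_devices(report_json: str) -> dict[str, str]:
        """Map each OSD id to its BlueStore block LV path.

        Expects the ceph-volume lvm list --format json shape seen above:
        {"1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "tags": {...}, ...}], ...}
        """
        report = json.loads(report_json)
        devices = {}
        for osd_id, lvs in report.items():
            for lv in lvs:
                # Only "block" LVs carry the BlueStore data device.
                if lv.get("tags", {}).get("ceph.type") == "block":
                    devices[osd_id] = lv["lv_path"]
        return devices

For the report above this would yield {"0": "/dev/ceph_vg0/ceph_lv0", "1": "/dev/ceph_vg1/ceph_lv1", "2": "/dev/ceph_vg2/ceph_lv2"} (the "0" entry begins before this excerpt).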
Dec 09 16:12:57 compute-0 systemd[1]: libpod-aeb5ffaa0175ae5d3aa5e743f6f025d3914585b118e71cea864f71d6d2535419.scope: Deactivated successfully.
Dec 09 16:12:57 compute-0 podman[143868]: 2025-12-09 16:12:57.337961826 +0000 UTC m=+0.455288567 container died aeb5ffaa0175ae5d3aa5e743f6f025d3914585b118e71cea864f71d6d2535419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f6c2167c5d5a4c140a48def58f4b20fd7c9e13bf2b1fbcf25c79febe3e9346f-merged.mount: Deactivated successfully.
Dec 09 16:12:57 compute-0 ceph-mon[75222]: pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:57 compute-0 podman[143868]: 2025-12-09 16:12:57.387205154 +0000 UTC m=+0.504531895 container remove aeb5ffaa0175ae5d3aa5e743f6f025d3914585b118e71cea864f71d6d2535419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_tesla, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:12:57 compute-0 systemd[1]: libpod-conmon-aeb5ffaa0175ae5d3aa5e743f6f025d3914585b118e71cea864f71d6d2535419.scope: Deactivated successfully.
Dec 09 16:12:57 compute-0 sudo[143664]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:57 compute-0 sudo[143980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:12:57 compute-0 sudo[143980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:12:57 compute-0 sudo[143980]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:57 compute-0 sudo[144005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:12:57 compute-0 sudo[144005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
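[Annotation] The sudo entry above shows cephadm's pattern for host inventory: re-invoke the downloaded cephadm binary as root, pinned to a specific container image digest and a timeout, wrapping "ceph-volume ... raw list --format json". A sketch of replaying that exact invocation from Python (paths, digest, and fsid copied verbatim from the log entry; this is not cephadm's internal code):

    import json
    import subprocess

    FSID = "67f67f44-54fc-54ea-8df0-10931b6ecdaf"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    def raw_list() -> dict:
        """Run the containerized ceph-volume raw list and parse its JSON."""
        out = subprocess.run(
            ["sudo", "/bin/python3", CEPHADM,
             "--image", IMAGE, "--timeout", "895",
             "ceph-volume", "--fsid", FSID, "--",
             "raw", "list", "--format", "json"],
            check=True, capture_output=True, text=True, timeout=900,
        )
        return json.loads(out.stdout)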
Dec 09 16:12:57 compute-0 sudo[144105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjquvaokbpzpxucnunlskhcgjnyobgxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296777.2521539-551-12913734540962/AnsiballZ_podman_container_info.py'
Dec 09 16:12:57 compute-0 sudo[144105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:57 compute-0 podman[144118]: 2025-12-09 16:12:57.819052899 +0000 UTC m=+0.038395127 container create f81c65a5b3ec998dae1e19bec7bb2a9ecf6db30bcc83ce1a6abc7a5c9912b39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_shaw, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 09 16:12:57 compute-0 systemd[1]: Started libpod-conmon-f81c65a5b3ec998dae1e19bec7bb2a9ecf6db30bcc83ce1a6abc7a5c9912b39a.scope.
Dec 09 16:12:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:12:57 compute-0 podman[144118]: 2025-12-09 16:12:57.894776021 +0000 UTC m=+0.114118279 container init f81c65a5b3ec998dae1e19bec7bb2a9ecf6db30bcc83ce1a6abc7a5c9912b39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 09 16:12:57 compute-0 podman[144118]: 2025-12-09 16:12:57.802220942 +0000 UTC m=+0.021563190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:12:57 compute-0 podman[144118]: 2025-12-09 16:12:57.900549412 +0000 UTC m=+0.119891640 container start f81c65a5b3ec998dae1e19bec7bb2a9ecf6db30bcc83ce1a6abc7a5c9912b39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_shaw, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:12:57 compute-0 podman[144118]: 2025-12-09 16:12:57.903856124 +0000 UTC m=+0.123198372 container attach f81c65a5b3ec998dae1e19bec7bb2a9ecf6db30bcc83ce1a6abc7a5c9912b39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:12:57 compute-0 determined_shaw[144134]: 167 167
Dec 09 16:12:57 compute-0 systemd[1]: libpod-f81c65a5b3ec998dae1e19bec7bb2a9ecf6db30bcc83ce1a6abc7a5c9912b39a.scope: Deactivated successfully.
Dec 09 16:12:57 compute-0 podman[144118]: 2025-12-09 16:12:57.9051507 +0000 UTC m=+0.124492928 container died f81c65a5b3ec998dae1e19bec7bb2a9ecf6db30bcc83ce1a6abc7a5c9912b39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_shaw, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5508d6c7cc7638a4da2fd92dffd0b539b26582dc804870fa6deb3379dfdc14c-merged.mount: Deactivated successfully.
Dec 09 16:12:57 compute-0 podman[144118]: 2025-12-09 16:12:57.936016517 +0000 UTC m=+0.155358745 container remove f81c65a5b3ec998dae1e19bec7bb2a9ecf6db30bcc83ce1a6abc7a5c9912b39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 09 16:12:57 compute-0 python3.9[144116]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 09 16:12:57 compute-0 systemd[1]: libpod-conmon-f81c65a5b3ec998dae1e19bec7bb2a9ecf6db30bcc83ce1a6abc7a5c9912b39a.scope: Deactivated successfully.
Dec 09 16:12:58 compute-0 sudo[144105]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:58 compute-0 podman[144184]: 2025-12-09 16:12:58.081439886 +0000 UTC m=+0.043678174 container create 9f8f9b007aabe0925ce6bfbdc429baf1ed8eb04593aacbb4664fdf4d5929a6be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_borg, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:12:58 compute-0 systemd[1]: Started libpod-conmon-9f8f9b007aabe0925ce6bfbdc429baf1ed8eb04593aacbb4664fdf4d5929a6be.scope.
Dec 09 16:12:58 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f961854c3ede84c89488b239bd1c6bbe52b53848211ca70cebb651ced65e6a92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f961854c3ede84c89488b239bd1c6bbe52b53848211ca70cebb651ced65e6a92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f961854c3ede84c89488b239bd1c6bbe52b53848211ca70cebb651ced65e6a92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f961854c3ede84c89488b239bd1c6bbe52b53848211ca70cebb651ced65e6a92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:12:58 compute-0 podman[144184]: 2025-12-09 16:12:58.062268714 +0000 UTC m=+0.024507032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:12:58 compute-0 podman[144184]: 2025-12-09 16:12:58.164538894 +0000 UTC m=+0.126777212 container init 9f8f9b007aabe0925ce6bfbdc429baf1ed8eb04593aacbb4664fdf4d5929a6be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_borg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:12:58 compute-0 podman[144184]: 2025-12-09 16:12:58.172570457 +0000 UTC m=+0.134808755 container start 9f8f9b007aabe0925ce6bfbdc429baf1ed8eb04593aacbb4664fdf4d5929a6be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_borg, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:12:58 compute-0 podman[144184]: 2025-12-09 16:12:58.17661835 +0000 UTC m=+0.138856648 container attach 9f8f9b007aabe0925ce6bfbdc429baf1ed8eb04593aacbb4664fdf4d5929a6be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_borg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:12:58 compute-0 lvm[144356]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:12:58 compute-0 lvm[144355]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:12:58 compute-0 lvm[144356]: VG ceph_vg1 finished
Dec 09 16:12:58 compute-0 lvm[144355]: VG ceph_vg0 finished
Dec 09 16:12:58 compute-0 lvm[144358]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:12:58 compute-0 lvm[144358]: VG ceph_vg2 finished
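[Annotation] The three lvm[...] pairs above are event-driven autoactivation: a udev-triggered pvscan marks each PV online, and once every PV of a volume group is present the VG is reported complete and activated. A sketch that checks the same PV-to-VG membership, assuming the JSON reporting mode of recent lvm2 ("pvs --reportformat json"); expected result here is one loop device per ceph VG:

    import json
    import subprocess

    def vg_members() -> dict[str, list[str]]:
        """Group PV device paths by volume group via lvm's JSON reporting."""
        out = subprocess.run(
            ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
            check=True, capture_output=True, text=True,
        )
        rows = json.loads(out.stdout)["report"][0]["pv"]
        groups: dict[str, list[str]] = {}
        for row in rows:
            groups.setdefault(row["vg_name"], []).append(row["pv_name"])
        return groups

    # Expected on this host, per the log:
    # {'ceph_vg0': ['/dev/loop3'], 'ceph_vg1': ['/dev/loop4'], 'ceph_vg2': ['/dev/loop5']}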
Dec 09 16:12:58 compute-0 goofy_borg[144210]: {}
Dec 09 16:12:58 compute-0 systemd[1]: libpod-9f8f9b007aabe0925ce6bfbdc429baf1ed8eb04593aacbb4664fdf4d5929a6be.scope: Deactivated successfully.
Dec 09 16:12:58 compute-0 podman[144184]: 2025-12-09 16:12:58.906271277 +0000 UTC m=+0.868509605 container died 9f8f9b007aabe0925ce6bfbdc429baf1ed8eb04593aacbb4664fdf4d5929a6be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_borg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:12:58 compute-0 systemd[1]: libpod-9f8f9b007aabe0925ce6bfbdc429baf1ed8eb04593aacbb4664fdf4d5929a6be.scope: Consumed 1.205s CPU time.
Dec 09 16:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f961854c3ede84c89488b239bd1c6bbe52b53848211ca70cebb651ced65e6a92-merged.mount: Deactivated successfully.
Dec 09 16:12:58 compute-0 podman[144184]: 2025-12-09 16:12:58.964682739 +0000 UTC m=+0.926921037 container remove 9f8f9b007aabe0925ce6bfbdc429baf1ed8eb04593aacbb4664fdf4d5929a6be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_borg, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:12:58 compute-0 systemd[1]: libpod-conmon-9f8f9b007aabe0925ce6bfbdc429baf1ed8eb04593aacbb4664fdf4d5929a6be.scope: Deactivated successfully.
Dec 09 16:12:59 compute-0 sudo[144005]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:12:59 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:12:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:12:59 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:12:59 compute-0 sudo[144398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:12:59 compute-0 sudo[144398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:12:59 compute-0 sudo[144398]: pam_unix(sudo:session): session closed for user root
Dec 09 16:12:59 compute-0 sudo[144470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boosjmrbgbvsvzsejnukjrnjxttavcql ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765296778.6675992-564-81826612165799/AnsiballZ_edpm_container_manage.py'
Dec 09 16:12:59 compute-0 sudo[144470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:12:59 compute-0 ceph-mon[75222]: pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:12:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:12:59 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:12:59 compute-0 python3[144472]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
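[Annotation] The ansible-edpm_container_manage invocation above states the module's contract: scan config_dir for files matching config_patterns, treat each as a container startup definition labelled with config_id, and apply them with the given concurrency. A sketch of just the discovery step under those parameters (the module's real logic is more involved):

    import json
    from pathlib import Path

    def discover_configs(config_dir: str, pattern: str = "*.json") -> dict[str, dict]:
        """Load every container startup config the module would consider."""
        return {p.stem: json.loads(p.read_text())
                for p in sorted(Path(config_dir).glob(pattern))}

    # As invoked above:
    # discover_configs("/var/lib/edpm-config/container-startup-config/ovn_controller")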
Dec 09 16:12:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:01 compute-0 ceph-mon[75222]: pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:03 compute-0 ceph-mon[75222]: pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:13:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2111 writes, 9283 keys, 2111 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2111 writes, 2111 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2111 writes, 9283 keys, 2111 commit groups, 1.0 writes per commit group, ingest: 12.23 MB, 0.02 MB/s
                                           Interval WAL: 2111 writes, 2111 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    121.9      0.07              0.03         3    0.024       0      0       0.0       0.0
                                             L6      1/0    6.84 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    145.7    128.4      0.11              0.05         2    0.055    7227    725       0.0       0.0
                                            Sum      1/0    6.84 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     88.2    125.8      0.18              0.08         5    0.036    7227    725       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     90.2    128.4      0.18              0.08         4    0.044    7227    725       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    145.7    128.4      0.11              0.05         2    0.055    7227    725       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    128.4      0.07              0.03         2    0.034       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.009, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ad05ef58d0#2 capacity: 308.00 MB usage: 655.55 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(38,563.77 KB,0.178751%) FilterBlock(6,28.61 KB,0.00907105%) IndexBlock(6,63.17 KB,0.0200296%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
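[Annotation] The indented block above is rocksdb's periodic stats dump (600 s interval) from the mon store, rendered by journald as one multi-line message. Its headline lines are fixed-format and easy to scrape; a sketch extracting the write counters from the "Cumulative writes" line, using the exact text printed above as the test string:

    import re

    LINE = ("Cumulative writes: 2111 writes, 9283 keys, 2111 commit groups, "
            "1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s")

    PATTERN = re.compile(
        r"Cumulative writes: (?P<writes>\d+) writes, (?P<keys>\d+) keys, "
        r"(?P<groups>\d+) commit groups.*ingest: (?P<ingest_gb>[\d.]+) GB"
    )

    m = PATTERN.search(LINE)
    assert m is not None
    writes, keys = int(m["writes"]), int(m["keys"])
    ingest_gb = float(m["ingest_gb"])
    # -> writes=2111, keys=9283, ingest_gb=0.01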
Dec 09 16:13:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:04 compute-0 podman[144485]: 2025-12-09 16:13:04.749804715 +0000 UTC m=+5.286910529 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Dec 09 16:13:04 compute-0 podman[144606]: 2025-12-09 16:13:04.886686927 +0000 UTC m=+0.046052491 container create 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:13:04 compute-0 podman[144606]: 2025-12-09 16:13:04.865533089 +0000 UTC m=+0.024898673 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Dec 09 16:13:04 compute-0 python3[144472]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
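[Annotation] The PODMAN-CONTAINER-DEBUG line above shows how the declarative config_data dict is flattened into "podman create" arguments: "net" becomes --network, "privileged" becomes --privileged=True, each "volumes" entry becomes a --volume flag, and the environment map becomes --env pairs. A sketch of that flattening (field names taken from the dict above; this is an illustration of the mapping, not the module's implementation):

    def podman_create_args(name: str, cfg: dict) -> list[str]:
        """Flatten an edpm-style container config into podman create argv."""
        args = ["podman", "create", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args += ["--privileged=True"]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        args.append(cfg["image"])
        return args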
Dec 09 16:13:05 compute-0 sudo[144470]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:05 compute-0 sshd-session[144596]: Invalid user dspace from 146.190.31.45 port 56852
Dec 09 16:13:05 compute-0 sshd-session[144596]: Connection closed by invalid user dspace 146.190.31.45 port 56852 [preauth]
Dec 09 16:13:05 compute-0 ceph-mon[75222]: pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:05 compute-0 sudo[144794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddsygetxqtvvghzuewlyvauqzcdfupnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296785.2722497-572-177210938093121/AnsiballZ_stat.py'
Dec 09 16:13:05 compute-0 sudo[144794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:05 compute-0 python3.9[144796]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:13:05 compute-0 sudo[144794]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:06 compute-0 sudo[144948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvqxelfowsuwogrmpnocqsvguyprpkbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296786.1911056-581-187508984934996/AnsiballZ_file.py'
Dec 09 16:13:06 compute-0 sudo[144948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:06 compute-0 python3.9[144950]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:13:06 compute-0 sudo[144948]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:06 compute-0 sudo[145024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihekkxhsnvguezwvjsjjpzrtzwndyazc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296786.1911056-581-187508984934996/AnsiballZ_stat.py'
Dec 09 16:13:06 compute-0 sudo[145024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:07 compute-0 python3.9[145026]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:13:07 compute-0 sudo[145024]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:07 compute-0 ceph-mon[75222]: pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:07 compute-0 sudo[145175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxddozbmnwbcwagecvjtnbtneghtktxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296787.225387-581-12973695110229/AnsiballZ_copy.py'
Dec 09 16:13:07 compute-0 sudo[145175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:07 compute-0 python3.9[145177]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765296787.225387-581-12973695110229/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:13:07 compute-0 sudo[145175]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:08 compute-0 sudo[145251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pusxjegzyhdiycgjhnughteqmtlcvzph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296787.225387-581-12973695110229/AnsiballZ_systemd.py'
Dec 09 16:13:08 compute-0 sudo[145251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:08 compute-0 python3.9[145253]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 16:13:08 compute-0 systemd[1]: Reloading.
Dec 09 16:13:08 compute-0 systemd-rc-local-generator[145281]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:13:08 compute-0 systemd-sysv-generator[145284]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:13:08 compute-0 ceph-mon[75222]: pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:08 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 09 16:13:08 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:08.989270) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:13:08 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 09 16:13:08 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296788989317, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 670, "num_deletes": 251, "total_data_size": 824016, "memory_usage": 836408, "flush_reason": "Manual Compaction"}
Dec 09 16:13:08 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296789001014, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 816892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9055, "largest_seqno": 9724, "table_properties": {"data_size": 813330, "index_size": 1406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7673, "raw_average_key_size": 18, "raw_value_size": 806275, "raw_average_value_size": 1942, "num_data_blocks": 65, "num_entries": 415, "num_filter_entries": 415, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296735, "oldest_key_time": 1765296735, "file_creation_time": 1765296788, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 11792 microseconds, and 5183 cpu microseconds.
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.001066) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 816892 bytes OK
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.001085) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.002627) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.002641) EVENT_LOG_v1 {"time_micros": 1765296789002637, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.002657) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 820491, prev total WAL file size 820491, number of live WAL files 2.
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.003269) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(797KB)], [23(7002KB)]
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296789003338, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7987910, "oldest_snapshot_seqno": -1}
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3311 keys, 6188032 bytes, temperature: kUnknown
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296789037275, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6188032, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6163736, "index_size": 14907, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80328, "raw_average_key_size": 24, "raw_value_size": 6101679, "raw_average_value_size": 1842, "num_data_blocks": 649, "num_entries": 3311, "num_filter_entries": 3311, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765296789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.037573) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6188032 bytes
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.039307) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.7 rd, 181.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.8 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(17.4) write-amplify(7.6) OK, records in: 3824, records dropped: 513 output_compression: NoCompression
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.039327) EVENT_LOG_v1 {"time_micros": 1765296789039317, "job": 8, "event": "compaction_finished", "compaction_time_micros": 34032, "compaction_time_cpu_micros": 13904, "output_level": 6, "num_output_files": 1, "total_output_size": 6188032, "num_input_records": 3824, "num_output_records": 3311, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296789040289, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765296789041673, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.003074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.041801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.041810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.041815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.041820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:13:09 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:13:09.041825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
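[Annotation] JOB 8's summary makes the amplification figures reproducible from the event log: the 816892-byte L0 table (#25, flushed from the memtable) was merged with the existing L6 file into a 6188032-byte output, so write-amplify is output bytes over newly flushed bytes and read-write-amplify adds the bytes read. A quick check, with the byte counts copied from the entries above:

    l0_input = 816_892          # table #25, flushed from the memtable
    input_total = 7_987_910     # JOB 8 "input_data_size" (L0 + L6 inputs)
    output = 6_188_032          # table #26, the compacted L6 file

    write_amplify = output / l0_input                 # ~7.6, as logged
    rw_amplify = (input_total + output) / l0_input    # ~17.4, as logged
    print(round(write_amplify, 1), round(rw_amplify, 1))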
Dec 09 16:13:09 compute-0 sudo[145251]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:09 compute-0 sudo[145363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thptkigiftwenmeotkwwctctzosdvgwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296787.225387-581-12973695110229/AnsiballZ_systemd.py'
Dec 09 16:13:09 compute-0 sudo[145363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:09 compute-0 python3.9[145365]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:13:09 compute-0 systemd[1]: Reloading.
Dec 09 16:13:09 compute-0 systemd-sysv-generator[145396]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:13:09 compute-0 systemd-rc-local-generator[145392]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:13:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:10 compute-0 systemd[1]: Starting ovn_controller container...
Dec 09 16:13:10 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8a83efe91097139ff8420d1b99fa7334a2b40159947443bc0b52fb0b956608/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 09 16:13:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470.
Dec 09 16:13:10 compute-0 podman[145407]: 2025-12-09 16:13:10.274749973 +0000 UTC m=+0.123884922 container init 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 09 16:13:10 compute-0 ovn_controller[145421]: + sudo -E kolla_set_configs
Dec 09 16:13:10 compute-0 podman[145407]: 2025-12-09 16:13:10.310010952 +0000 UTC m=+0.159145911 container start 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 09 16:13:10 compute-0 edpm-start-podman-container[145407]: ovn_controller
Dec 09 16:13:10 compute-0 systemd[1]: Created slice User Slice of UID 0.
Dec 09 16:13:10 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 09 16:13:10 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 09 16:13:10 compute-0 systemd[1]: Starting User Manager for UID 0...
Dec 09 16:13:10 compute-0 edpm-start-podman-container[145406]: Creating additional drop-in dependency for "ovn_controller" (0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470)
Dec 09 16:13:10 compute-0 systemd[145462]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 09 16:13:10 compute-0 podman[145428]: 2025-12-09 16:13:10.393416369 +0000 UTC m=+0.072128805 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:13:10 compute-0 systemd[1]: 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470-46e7fcdfdf3aba34.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 16:13:10 compute-0 systemd[1]: 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470-46e7fcdfdf3aba34.service: Failed with result 'exit-code'.
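The health_status=starting entry above comes from podman's native healthcheck, wired up through the 'healthcheck' key in config_data ('test': '/openstack/healthcheck'). The transient unit exiting with status=1 is the first probe failing while ovn-controller is still starting (health_failing_streak=1); the 16:13:40 entry further down shows the same container reporting health_status=healthy. A quick way to query the same state from the host, assuming a reasonably recent podman (older releases expose .State.Healthcheck instead of .State.Health):

    # Current health state as recorded by podman:
    podman inspect --format '{{.State.Health.Status}}' ovn_controller
    # Run one probe by hand; exit code 0 means the /openstack/healthcheck test passed:
    podman healthcheck run ovn_controller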
Dec 09 16:13:10 compute-0 systemd[1]: Reloading.
Dec 09 16:13:10 compute-0 systemd-rc-local-generator[145510]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:13:10 compute-0 systemd-sysv-generator[145514]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:13:10 compute-0 systemd[145462]: Queued start job for default target Main User Target.
Dec 09 16:13:10 compute-0 systemd[145462]: Created slice User Application Slice.
Dec 09 16:13:10 compute-0 systemd[145462]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 09 16:13:10 compute-0 systemd[145462]: Started Daily Cleanup of User's Temporary Directories.
Dec 09 16:13:10 compute-0 systemd[145462]: Reached target Paths.
Dec 09 16:13:10 compute-0 systemd[145462]: Reached target Timers.
Dec 09 16:13:10 compute-0 systemd[145462]: Starting D-Bus User Message Bus Socket...
Dec 09 16:13:10 compute-0 systemd[145462]: Starting Create User's Volatile Files and Directories...
Dec 09 16:13:10 compute-0 systemd[145462]: Finished Create User's Volatile Files and Directories.
Dec 09 16:13:10 compute-0 systemd[145462]: Listening on D-Bus User Message Bus Socket.
Dec 09 16:13:10 compute-0 systemd[145462]: Reached target Sockets.
Dec 09 16:13:10 compute-0 systemd[145462]: Reached target Basic System.
Dec 09 16:13:10 compute-0 systemd[145462]: Reached target Main User Target.
Dec 09 16:13:10 compute-0 systemd[145462]: Startup finished in 139ms.
Dec 09 16:13:10 compute-0 systemd[1]: Started User Manager for UID 0.
Dec 09 16:13:10 compute-0 systemd[1]: Started ovn_controller container.
Dec 09 16:13:10 compute-0 systemd[1]: Started Session c1 of User root.
Dec 09 16:13:10 compute-0 sudo[145363]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:10 compute-0 ovn_controller[145421]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 16:13:10 compute-0 ovn_controller[145421]: INFO:__main__:Validating config file
Dec 09 16:13:10 compute-0 ovn_controller[145421]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 16:13:10 compute-0 ovn_controller[145421]: INFO:__main__:Writing out command to execute
Dec 09 16:13:10 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 09 16:13:10 compute-0 ovn_controller[145421]: ++ cat /run_command
Dec 09 16:13:10 compute-0 ovn_controller[145421]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 09 16:13:10 compute-0 ovn_controller[145421]: + ARGS=
Dec 09 16:13:10 compute-0 ovn_controller[145421]: + sudo kolla_copy_cacerts
Dec 09 16:13:10 compute-0 systemd[1]: Started Session c2 of User root.
Dec 09 16:13:10 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 09 16:13:10 compute-0 ovn_controller[145421]: + [[ ! -n '' ]]
Dec 09 16:13:10 compute-0 ovn_controller[145421]: + . kolla_extend_start
Dec 09 16:13:10 compute-0 ovn_controller[145421]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 09 16:13:10 compute-0 ovn_controller[145421]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 09 16:13:10 compute-0 ovn_controller[145421]: + umask 0022
Dec 09 16:13:10 compute-0 ovn_controller[145421]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
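This is the standard kolla start sequence: kolla_set_configs loads and validates /var/lib/kolla/config_files/config.json (bind-mounted from ovn_controller.json per the volumes list above), applies the COPY_ALWAYS strategy, writes the service command to /run_command, and the wrapper finally execs it. A minimal sketch for retracing that from the host; the JSON contents themselves are not shown in this log, so the output is simply whatever kolla was given:

    # Inspect the config kolla consumed and the command it produced:
    podman exec ovn_controller cat /var/lib/kolla/config_files/config.json
    podman exec ovn_controller cat /run_command    # the ovn-controller command line exec'd above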
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00003|main|INFO|OVN internal version is: [24.03.8-20.33.0-76.8]
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 09 16:13:10 compute-0 NetworkManager[49021]: <info>  [1765296790.9036] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 09 16:13:10 compute-0 NetworkManager[49021]: <info>  [1765296790.9048] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 16:13:10 compute-0 NetworkManager[49021]: <warn>  [1765296790.9051] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 16:13:10 compute-0 NetworkManager[49021]: <info>  [1765296790.9064] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 09 16:13:10 compute-0 NetworkManager[49021]: <info>  [1765296790.9074] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 09 16:13:10 compute-0 NetworkManager[49021]: <info>  [1765296790.9081] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 09 16:13:10 compute-0 kernel: br-int: entered promiscuous mode
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00021|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 09 16:13:10 compute-0 ovn_controller[145421]: 2025-12-09T16:13:10Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
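At this point ovn-controller holds all three of its connections: the local ovsdb-server (unix:/run/openvswitch/db.sock), the OVN southbound database over SSL (ovsdbserver-sb.openstack.svc:6642, authenticated with the ovndb key/cert/CA mounted into the container), and the br-int OpenFlow management socket used by the main, pinctrl, and statctrl threads. Assuming standard OVS/OVN tooling on the host, the same state can be checked on demand:

    # Where ovn-controller was told to find the southbound DB:
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    # Live status of the southbound session ("connected" once established):
    ovn-appctl -t ovn-controller connection-status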
Dec 09 16:13:10 compute-0 NetworkManager[49021]: <info>  [1765296790.9406] manager: (ovn-82cd4f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 09 16:13:10 compute-0 systemd-udevd[145578]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 16:13:10 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Dec 09 16:13:10 compute-0 systemd-udevd[145579]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 16:13:10 compute-0 NetworkManager[49021]: <info>  [1765296790.9586] device (genev_sys_6081): carrier: link connected
Dec 09 16:13:10 compute-0 NetworkManager[49021]: <info>  [1765296790.9589] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
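br-int entering promiscuous mode and the genev_sys_6081 device appearing are side effects of ovn-controller building the integration bridge: Geneve encapsulation runs over UDP 6081, and the ovn-82cd4f-0 port is a tunnel toward a peer chassis. Two quick host-side checks, as a sketch:

    # Tunnel ports ovn-controller added to the integration bridge:
    ovs-vsctl list-ports br-int
    # Kernel Geneve device backing those tunnels:
    ip -d link show genev_sys_6081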
Dec 09 16:13:11 compute-0 ceph-mon[75222]: pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:11 compute-0 sudo[145685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dclvgdsxzlljzefpsvgurwugmmmkvlts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296790.9589577-609-122894666502493/AnsiballZ_command.py'
Dec 09 16:13:11 compute-0 sudo[145685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:11 compute-0 python3.9[145687]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:13:11 compute-0 ovs-vsctl[145688]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 09 16:13:11 compute-0 sudo[145685]: pam_unix(sudo:session): session closed for user root
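The task above clears other_config:hw-offload, so OVS reverts to its default (offload disabled) datapath handling. For reference, the pair of operations looks like this; the remove form is exactly what the playbook ran:

    ovs-vsctl set Open_vSwitch . other_config:hw-offload=true   # enable hardware offload
    ovs-vsctl remove Open_vSwitch . other_config hw-offload     # back to the default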
Dec 09 16:13:11 compute-0 sudo[145838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owxtsjudibslncyansbgqhxsultszbwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296791.6788473-617-26505053490580/AnsiballZ_command.py'
Dec 09 16:13:11 compute-0 sudo[145838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:12 compute-0 python3.9[145840]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:13:12 compute-0 ovs-vsctl[145842]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 09 16:13:12 compute-0 sudo[145838]: pam_unix(sudo:session): session closed for user root
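The db_ctl_base ERR above is expected rather than a failure: the playbook reads ovn-cms-options before removing it, and the key simply is not set on this chassis. If the probe should stay quiet when the key is absent, ovs-vsctl's --if-exists flag does that (it prints an empty result instead of logging an error):

    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options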
Dec 09 16:13:12 compute-0 sudo[145993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trrlwwnsftsaldvlxbuecmeboxegydwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296792.5025797-631-62800828489299/AnsiballZ_command.py'
Dec 09 16:13:12 compute-0 sudo[145993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:12 compute-0 python3.9[145995]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:13:12 compute-0 ovs-vsctl[145996]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 09 16:13:13 compute-0 sudo[145993]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:13 compute-0 ceph-mon[75222]: pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:13 compute-0 sshd-session[134208]: Connection closed by 192.168.122.30 port 44806
Dec 09 16:13:13 compute-0 sshd-session[134205]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:13:13 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Dec 09 16:13:13 compute-0 systemd[1]: session-46.scope: Consumed 57.965s CPU time.
Dec 09 16:13:13 compute-0 systemd-logind[786]: Session 46 logged out. Waiting for processes to exit.
Dec 09 16:13:13 compute-0 systemd-logind[786]: Removed session 46.
Dec 09 16:13:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:15 compute-0 ceph-mon[75222]: pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:17 compute-0 ceph-mon[75222]: pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:19 compute-0 sshd-session[146021]: Accepted publickey for zuul from 192.168.122.30 port 56210 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:13:19 compute-0 systemd-logind[786]: New session 48 of user zuul.
Dec 09 16:13:19 compute-0 systemd[1]: Started Session 48 of User zuul.
Dec 09 16:13:19 compute-0 sshd-session[146021]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:13:19 compute-0 ceph-mon[75222]: pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:20 compute-0 python3.9[146174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:13:21 compute-0 systemd[1]: Stopping User Manager for UID 0...
Dec 09 16:13:21 compute-0 systemd[145462]: Activating special unit Exit the Session...
Dec 09 16:13:21 compute-0 systemd[145462]: Stopped target Main User Target.
Dec 09 16:13:21 compute-0 systemd[145462]: Stopped target Basic System.
Dec 09 16:13:21 compute-0 systemd[145462]: Stopped target Paths.
Dec 09 16:13:21 compute-0 systemd[145462]: Stopped target Sockets.
Dec 09 16:13:21 compute-0 systemd[145462]: Stopped target Timers.
Dec 09 16:13:21 compute-0 systemd[145462]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 09 16:13:21 compute-0 systemd[145462]: Closed D-Bus User Message Bus Socket.
Dec 09 16:13:21 compute-0 systemd[145462]: Stopped Create User's Volatile Files and Directories.
Dec 09 16:13:21 compute-0 systemd[145462]: Removed slice User Application Slice.
Dec 09 16:13:21 compute-0 systemd[145462]: Reached target Shutdown.
Dec 09 16:13:21 compute-0 systemd[145462]: Finished Exit the Session.
Dec 09 16:13:21 compute-0 systemd[145462]: Reached target Exit the Session.
Dec 09 16:13:21 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Dec 09 16:13:21 compute-0 systemd[1]: Stopped User Manager for UID 0.
Dec 09 16:13:21 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 09 16:13:21 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 09 16:13:21 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 09 16:13:21 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 09 16:13:21 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Dec 09 16:13:21 compute-0 ceph-mon[75222]: pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:21 compute-0 sudo[146330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eklwhuftauatrhelvndmdzjqoluntzcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296800.8110583-34-73328744806084/AnsiballZ_file.py'
Dec 09 16:13:21 compute-0 sudo[146330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:21 compute-0 python3.9[146332]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:21 compute-0 sudo[146330]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:21 compute-0 sudo[146482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnbqqkrlwiszqafrwrulhempgaehsaoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296801.5861993-34-85024677887352/AnsiballZ_file.py'
Dec 09 16:13:21 compute-0 sudo[146482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:22 compute-0 python3.9[146484]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:22 compute-0 sudo[146482]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:22 compute-0 sudo[146634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlpqeswuvjxtglfoirfgvnfnypdzvzph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296802.227824-34-241450356067366/AnsiballZ_file.py'
Dec 09 16:13:22 compute-0 sudo[146634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:22 compute-0 python3.9[146636]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:22 compute-0 sudo[146634]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:23 compute-0 sudo[146786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbgwvthogaqowyvwoscoccxckxzhbczg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296802.8923752-34-267475997252052/AnsiballZ_file.py'
Dec 09 16:13:23 compute-0 sudo[146786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:23 compute-0 ceph-mon[75222]: pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:23 compute-0 python3.9[146788]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:23 compute-0 sudo[146786]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:23 compute-0 sudo[146938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfxbdvkiautmljrxllysnmlnwqanyekl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296803.4691322-34-80303403164119/AnsiballZ_file.py'
Dec 09 16:13:23 compute-0 sudo[146938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:23 compute-0 python3.9[146940]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:23 compute-0 sudo[146938]: pam_unix(sudo:session): session closed for user root
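The run of ansible.builtin.file tasks above pre-creates the directory tree the ovn-metadata-agent container will bind-mount, owned by zuul and labeled container_file_t so podman volumes work under SELinux. A rough shell equivalent of those tasks, for illustration only:

    install -d -o zuul -g zuul -m 0755 \
        /var/lib/neutron/kill_scripts \
        /var/lib/neutron/ovn-metadata-proxy \
        /var/lib/neutron/external/pids
    chcon -R -t container_file_t /var/lib/neutron   # matches setype=container_file_t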
Dec 09 16:13:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:24 compute-0 python3.9[147090]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:13:25 compute-0 ceph-mon[75222]: pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:25 compute-0 sudo[147240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlmtrqlimpqoiymkxjdabywdnfavjffo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296804.8897984-78-275708617357804/AnsiballZ_seboolean.py'
Dec 09 16:13:25 compute-0 sudo[147240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:25 compute-0 python3.9[147242]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
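ansible.posix.seboolean with persistent=True maps onto setsebool -P, which both flips the boolean live and records it across reboots; virt_sandbox_use_netlink allows sandboxed containers to use netlink sockets (needed by iproute2-style tooling inside them). Shell equivalent:

    setsebool -P virt_sandbox_use_netlink on
    getsebool virt_sandbox_use_netlink    # verify: "virt_sandbox_use_netlink --> on"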
Dec 09 16:13:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:13:25
Dec 09 16:13:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:13:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:13:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Dec 09 16:13:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:26 compute-0 sudo[147240]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:13:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:13:26 compute-0 python3.9[147392]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:27 compute-0 ceph-mon[75222]: pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:27 compute-0 python3.9[147513]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765296806.3517323-86-53230757062927/.source follow=False _original_basename=haproxy.j2 checksum=d225e0e1c34f765c55f17e757e326dba55238d01 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:28 compute-0 python3.9[147663]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:28 compute-0 python3.9[147784]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765296807.7993128-101-10521340616073/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:29 compute-0 sudo[147935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jliphowiyzogoufblwstkkbwyzblonoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296809.0233216-118-146151579822955/AnsiballZ_setup.py'
Dec 09 16:13:29 compute-0 sudo[147935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:29 compute-0 ceph-mon[75222]: pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:29 compute-0 python3.9[147937]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:13:29 compute-0 sudo[147935]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:30 compute-0 sudo[148019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rphlrqpjrhzljwjwiqgmlosdkytwdzim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296809.0233216-118-146151579822955/AnsiballZ_dnf.py'
Dec 09 16:13:30 compute-0 sudo[148019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:30 compute-0 python3.9[148021]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:13:31 compute-0 ceph-mon[75222]: pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:31 compute-0 sudo[148019]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:32 compute-0 sudo[148172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rowroxgljbcrsrlekybizgdzimhsrbug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296811.9638138-130-271001562350209/AnsiballZ_systemd.py'
Dec 09 16:13:32 compute-0 sudo[148172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:32 compute-0 python3.9[148174]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 16:13:32 compute-0 sudo[148172]: pam_unix(sudo:session): session closed for user root
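Back to back, the two tasks above ensure the openvswitch package is present and its service is enabled and running before any agent configuration lands. Shell equivalent:

    dnf install -y openvswitch
    systemctl enable --now openvswitch.service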
Dec 09 16:13:33 compute-0 ceph-mon[75222]: pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:33 compute-0 python3.9[148327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:34 compute-0 python3.9[148448]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765296813.143078-138-156070065511996/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:34 compute-0 python3.9[148598]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:35 compute-0 python3.9[148719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765296814.30214-138-210164664373714/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:35 compute-0 ceph-mon[75222]: pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:36 compute-0 python3.9[148869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:13:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:13:37 compute-0 python3.9[148990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765296816.0654972-182-262462241486718/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:37 compute-0 ceph-mon[75222]: pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:37 compute-0 python3.9[149140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:38 compute-0 python3.9[149261]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765296817.1922202-182-39688856647681/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
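The numeric prefixes on these generated files are deliberate: when the metadata agent is launched with --config-dir pointing at this directory (the usual kolla/EDPM arrangement, though the agent invocation is not in this excerpt), oslo.config reads the *.conf files in lexical order and later files override earlier ones:

    ls -1 /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/
    # 01-neutron-ovn-metadata-agent.conf   (base agent config)
    # 01-rootwrap.conf                     (rootwrap config)
    # 05-nova-metadata.conf                (overrides the 01-* files)
    # 10-neutron-metadata.conf             (highest precedence)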
Dec 09 16:13:38 compute-0 python3.9[149411]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:13:39 compute-0 sudo[149563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrdoggsxgtmssmawrzwdftnhqlpbbssq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296819.0945098-220-113511703734126/AnsiballZ_file.py'
Dec 09 16:13:39 compute-0 sudo[149563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:39 compute-0 python3.9[149565]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:39 compute-0 sudo[149563]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:40 compute-0 sudo[149715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdwlerycoahplxhnmrfdqapvgsutwoip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296819.7668455-228-138459247295293/AnsiballZ_stat.py'
Dec 09 16:13:40 compute-0 sudo[149715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:40 compute-0 ceph-mon[75222]: pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:40 compute-0 python3.9[149717]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:40 compute-0 sudo[149715]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:40 compute-0 ovn_controller[145421]: 2025-12-09T16:13:40Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Dec 09 16:13:40 compute-0 ovn_controller[145421]: 2025-12-09T16:13:40Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
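The 30-second memory report is ovn-controller's periodic self-accounting (peak RSS plus IDL cell and flow-table usage counters). The same figures are available on demand through the standard unixctl command:

    ovn-appctl -t ovn-controller memory/show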
Dec 09 16:13:40 compute-0 podman[149721]: 2025-12-09 16:13:40.656492143 +0000 UTC m=+0.096842971 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:13:40 compute-0 sudo[149819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlrnvejwqlrxjqdibepwrtlzrzdvcblf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296819.7668455-228-138459247295293/AnsiballZ_file.py'
Dec 09 16:13:40 compute-0 sudo[149819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:40 compute-0 python3.9[149821]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:40 compute-0 sudo[149819]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:41 compute-0 sudo[149971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmcrqavzugznpqynrcpwrapnbilfzcwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296821.1384196-228-11628125239702/AnsiballZ_stat.py'
Dec 09 16:13:41 compute-0 sudo[149971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:41 compute-0 ceph-mon[75222]: pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:41 compute-0 python3.9[149973]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:41 compute-0 sudo[149971]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:41 compute-0 sudo[150051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlroxrcomrqliwbiqdidpzjeqvgdlrpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296821.1384196-228-11628125239702/AnsiballZ_file.py'
Dec 09 16:13:41 compute-0 sudo[150051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:42 compute-0 python3.9[150053]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:42 compute-0 sudo[150051]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:42 compute-0 sudo[150203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkvohocrmzbyjkoglxsamhclnzfpodnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296822.2312133-251-58623674424309/AnsiballZ_file.py'
Dec 09 16:13:42 compute-0 sudo[150203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:42 compute-0 python3.9[150205]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:13:42 compute-0 sudo[150203]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:43 compute-0 sudo[150355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtktssklszguvuaoaskhsfxvjedvgfyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296822.9063008-259-100490061969250/AnsiballZ_stat.py'
Dec 09 16:13:43 compute-0 sudo[150355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:43 compute-0 python3.9[150357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:43 compute-0 sudo[150355]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:43 compute-0 ceph-mon[75222]: pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:43 compute-0 sshd-session[149976]: Received disconnect from 58.82.169.249 port 48298:11:  [preauth]
Dec 09 16:13:43 compute-0 sshd-session[149976]: Disconnected from authenticating user root 58.82.169.249 port 48298 [preauth]
Dec 09 16:13:43 compute-0 sudo[150434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oioiozttnywyreqrgjvsnmxurrfefqaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296822.9063008-259-100490061969250/AnsiballZ_file.py'
Dec 09 16:13:43 compute-0 sudo[150434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:43 compute-0 python3.9[150436]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:13:43 compute-0 sudo[150434]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:44 compute-0 sudo[150586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpoccpdugntjoilwweqssuxuzuslctim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296824.0974395-271-70000128795830/AnsiballZ_stat.py'
Dec 09 16:13:44 compute-0 sudo[150586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:44 compute-0 python3.9[150588]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:44 compute-0 sudo[150586]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:44 compute-0 sudo[150664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irzeebcabnerhhihlwyyhcyzxpqqirha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296824.0974395-271-70000128795830/AnsiballZ_file.py'
Dec 09 16:13:44 compute-0 sudo[150664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:45 compute-0 python3.9[150666]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:13:45 compute-0 sudo[150664]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:45 compute-0 sudo[150816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmltgymbbkpxvlnrhwnovjdzifwlitnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296825.258603-283-29404462641067/AnsiballZ_systemd.py'
Dec 09 16:13:45 compute-0 sudo[150816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:45 compute-0 ceph-mon[75222]: pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:46 compute-0 python3.9[150818]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:13:46 compute-0 systemd[1]: Reloading.
Dec 09 16:13:46 compute-0 systemd-rc-local-generator[150845]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:13:46 compute-0 systemd-sysv-generator[150849]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:13:46 compute-0 sudo[150816]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:46 compute-0 ceph-mon[75222]: pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:47 compute-0 sudo[151005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uprhltoufkghhexcolgutnuuobmobaht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296826.71235-291-5688999956233/AnsiballZ_stat.py'
Dec 09 16:13:47 compute-0 sudo[151005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:47 compute-0 python3.9[151007]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:47 compute-0 sudo[151005]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:47 compute-0 sudo[151083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubobsorixwwgmontflsvbmikqnlgjfvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296826.71235-291-5688999956233/AnsiballZ_file.py'
Dec 09 16:13:47 compute-0 sudo[151083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:47 compute-0 python3.9[151085]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:13:47 compute-0 sudo[151083]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:48 compute-0 sudo[151235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vekwvbndvvierhugwovdbsnafcceuxms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296827.9041798-303-262619795040758/AnsiballZ_stat.py'
Dec 09 16:13:48 compute-0 sudo[151235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:48 compute-0 python3.9[151237]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:48 compute-0 sudo[151235]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:48 compute-0 sshd-session[151238]: Invalid user dspace from 146.190.31.45 port 54982
Dec 09 16:13:48 compute-0 sshd-session[151238]: Connection closed by invalid user dspace 146.190.31.45 port 54982 [preauth]
Dec 09 16:13:48 compute-0 ceph-mon[75222]: pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:48 compute-0 sudo[151315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpfxpilysnggsphvghgjcyixbfpwnogg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296827.9041798-303-262619795040758/AnsiballZ_file.py'
Dec 09 16:13:48 compute-0 sudo[151315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:49 compute-0 python3.9[151317]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:13:49 compute-0 sudo[151315]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:49 compute-0 sudo[151467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcymkopsrxobpbjoskjolnumzrrgehka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296829.2757585-315-144848642530748/AnsiballZ_systemd.py'
Dec 09 16:13:49 compute-0 sudo[151467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:49 compute-0 python3.9[151469]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:13:49 compute-0 systemd[1]: Reloading.
Dec 09 16:13:50 compute-0 systemd-rc-local-generator[151496]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:13:50 compute-0 systemd-sysv-generator[151500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:13:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:50 compute-0 systemd[1]: Starting Create netns directory...
Dec 09 16:13:50 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 09 16:13:50 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 09 16:13:50 compute-0 systemd[1]: Finished Create netns directory.
Dec 09 16:13:50 compute-0 sudo[151467]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:50 compute-0 sudo[151661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpwmqqbpgwmxhqtxzobwrwdxmktuqcja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296830.589591-325-4197593801399/AnsiballZ_file.py'
Dec 09 16:13:50 compute-0 sudo[151661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:51 compute-0 python3.9[151663]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:51 compute-0 sudo[151661]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:51 compute-0 ceph-mon[75222]: pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:51 compute-0 sudo[151813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqfisbnrblbtvkunlthtszahtnlietyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296831.323016-333-1318699261370/AnsiballZ_stat.py'
Dec 09 16:13:51 compute-0 sudo[151813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:51 compute-0 python3.9[151815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:51 compute-0 sudo[151813]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:52 compute-0 sudo[151936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghklgobrjusdjaxndavhgyroyyfvipnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296831.323016-333-1318699261370/AnsiballZ_copy.py'
Dec 09 16:13:52 compute-0 sudo[151936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:52 compute-0 python3.9[151938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765296831.323016-333-1318699261370/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:52 compute-0 sudo[151936]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:53 compute-0 sudo[152088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qztjrxpihuhjjtznetzqwllryatarapj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296832.6770434-350-222701078255196/AnsiballZ_file.py'
Dec 09 16:13:53 compute-0 sudo[152088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:53 compute-0 python3.9[152090]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:13:53 compute-0 sudo[152088]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:53 compute-0 ceph-mon[75222]: pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:53 compute-0 sudo[152240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifgbpothototakdnfuunuspzkfmoemkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296833.4904153-358-152770729632918/AnsiballZ_stat.py'
Dec 09 16:13:53 compute-0 sudo[152240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:53 compute-0 python3.9[152242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:13:53 compute-0 sudo[152240]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:54 compute-0 sudo[152363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tywvbwulagahowbvtgkixlizoaygmcne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296833.4904153-358-152770729632918/AnsiballZ_copy.py'
Dec 09 16:13:54 compute-0 sudo[152363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:54 compute-0 python3.9[152365]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765296833.4904153-358-152770729632918/.source.json _original_basename=.sqoexs32 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:13:54 compute-0 sudo[152363]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:55 compute-0 sudo[152515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtgjcythmpxsllyudnfpphqgjibrqzdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296834.7258964-373-217216935838926/AnsiballZ_file.py'
Dec 09 16:13:55 compute-0 sudo[152515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:55 compute-0 python3.9[152517]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:13:55 compute-0 sudo[152515]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:55 compute-0 ceph-mon[75222]: pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:55 compute-0 sudo[152667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdfjoysfdcyjveiqejfbbuokjsmlgtsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296835.5332196-381-208057099966056/AnsiballZ_stat.py'
Dec 09 16:13:55 compute-0 sudo[152667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:56 compute-0 sudo[152667]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:56 compute-0 sudo[152790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxjiltweqqlmxkvymizhdnpijkuvimgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296835.5332196-381-208057099966056/AnsiballZ_copy.py'
Dec 09 16:13:56 compute-0 sudo[152790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:13:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:13:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:13:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:13:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:13:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:13:56 compute-0 sudo[152790]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:57 compute-0 sudo[152942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmxpozxdjphfnwrsimlvbzuwkgmdemcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296837.0002375-398-107210123235338/AnsiballZ_container_config_data.py'
Dec 09 16:13:57 compute-0 sudo[152942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:57 compute-0 python3.9[152944]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 09 16:13:57 compute-0 sudo[152942]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:57 compute-0 ceph-mon[75222]: pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:58 compute-0 sudo[153094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbaqxabwwfteinwtlresacmgerhwpivc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296837.8897736-407-186965899634090/AnsiballZ_container_config_hash.py'
Dec 09 16:13:58 compute-0 sudo[153094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:58 compute-0 python3.9[153096]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 16:13:58 compute-0 sudo[153094]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:59 compute-0 sudo[153173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:13:59 compute-0 sudo[153173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:13:59 compute-0 sudo[153173]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:59 compute-0 sudo[153221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 09 16:13:59 compute-0 sudo[153221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:13:59 compute-0 sudo[153296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjoskxnzbrjjilhugjsqqxcnpclvntrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296838.8638632-416-40998615089952/AnsiballZ_podman_container_info.py'
Dec 09 16:13:59 compute-0 sudo[153296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:13:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:13:59 compute-0 ceph-mon[75222]: pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:13:59 compute-0 python3.9[153298]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 09 16:13:59 compute-0 sudo[153221]: pam_unix(sudo:session): session closed for user root
Dec 09 16:13:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:13:59 compute-0 sudo[153296]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:14:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:14:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:14:00 compute-0 sudo[153421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:14:00 compute-0 sudo[153421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:14:00 compute-0 sudo[153421]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:00 compute-0 sudo[153446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:14:00 compute-0 sudo[153446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:14:00 compute-0 sudo[153558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdfolzsauutuvesyrrbwxgeelstxpigg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765296840.327459-429-240064039844936/AnsiballZ_edpm_container_manage.py'
Dec 09 16:14:00 compute-0 sudo[153558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:01 compute-0 sudo[153446]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:01 compute-0 python3[153561]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 16:14:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:14:01 compute-0 ceph-mon[75222]: pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:14:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:14:01 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:14:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:14:01 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:14:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:14:01 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:14:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:14:01 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:14:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:14:01 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:14:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:14:01 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:14:01 compute-0 sudo[153595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:14:01 compute-0 sudo[153595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:14:01 compute-0 sudo[153595]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:01 compute-0 sudo[153624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:14:01 compute-0 sudo[153624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:14:01 compute-0 podman[153664]: 2025-12-09 16:14:01.563320965 +0000 UTC m=+0.072974806 container create 782a2678c9038d6f098b779329adbc73f9d08a08dcf9fb5f2d35082dbc7de021 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moore, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 09 16:14:01 compute-0 systemd[1]: Started libpod-conmon-782a2678c9038d6f098b779329adbc73f9d08a08dcf9fb5f2d35082dbc7de021.scope.
Dec 09 16:14:01 compute-0 podman[153664]: 2025-12-09 16:14:01.513006427 +0000 UTC m=+0.022660288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:14:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:14:01 compute-0 podman[153664]: 2025-12-09 16:14:01.654357986 +0000 UTC m=+0.164011877 container init 782a2678c9038d6f098b779329adbc73f9d08a08dcf9fb5f2d35082dbc7de021 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moore, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 16:14:01 compute-0 podman[153664]: 2025-12-09 16:14:01.665581856 +0000 UTC m=+0.175235737 container start 782a2678c9038d6f098b779329adbc73f9d08a08dcf9fb5f2d35082dbc7de021 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:14:01 compute-0 podman[153664]: 2025-12-09 16:14:01.669738795 +0000 UTC m=+0.179392636 container attach 782a2678c9038d6f098b779329adbc73f9d08a08dcf9fb5f2d35082dbc7de021 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:14:01 compute-0 xenodochial_moore[153680]: 167 167
Dec 09 16:14:01 compute-0 systemd[1]: libpod-782a2678c9038d6f098b779329adbc73f9d08a08dcf9fb5f2d35082dbc7de021.scope: Deactivated successfully.
Dec 09 16:14:01 compute-0 podman[153664]: 2025-12-09 16:14:01.671206987 +0000 UTC m=+0.180860868 container died 782a2678c9038d6f098b779329adbc73f9d08a08dcf9fb5f2d35082dbc7de021 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moore, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a9d94dc77bf4b3dcc233b11e328f2b72d39006c2f4584989a81ca32dd9869a3-merged.mount: Deactivated successfully.
Dec 09 16:14:01 compute-0 podman[153664]: 2025-12-09 16:14:01.711974652 +0000 UTC m=+0.221628493 container remove 782a2678c9038d6f098b779329adbc73f9d08a08dcf9fb5f2d35082dbc7de021 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:14:01 compute-0 systemd[1]: libpod-conmon-782a2678c9038d6f098b779329adbc73f9d08a08dcf9fb5f2d35082dbc7de021.scope: Deactivated successfully.
Dec 09 16:14:02 compute-0 podman[153709]: 2025-12-09 16:14:01.914204919 +0000 UTC m=+0.028497715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:14:02 compute-0 podman[153709]: 2025-12-09 16:14:02.099964946 +0000 UTC m=+0.214257692 container create d41d405250dcc3042e2b7634d2865064b7a45da141d6ac62d3ac729ceeace312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclaren, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:14:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:02 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:14:02 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:14:02 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:14:02 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:14:02 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:14:02 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:14:02 compute-0 systemd[1]: Started libpod-conmon-d41d405250dcc3042e2b7634d2865064b7a45da141d6ac62d3ac729ceeace312.scope.
Dec 09 16:14:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b669c79e72f5ff1639fd764b375fb6f4ce8573f4e33322114641d7b97caa8769/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b669c79e72f5ff1639fd764b375fb6f4ce8573f4e33322114641d7b97caa8769/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b669c79e72f5ff1639fd764b375fb6f4ce8573f4e33322114641d7b97caa8769/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b669c79e72f5ff1639fd764b375fb6f4ce8573f4e33322114641d7b97caa8769/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b669c79e72f5ff1639fd764b375fb6f4ce8573f4e33322114641d7b97caa8769/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:02 compute-0 podman[153709]: 2025-12-09 16:14:02.649139355 +0000 UTC m=+0.763432121 container init d41d405250dcc3042e2b7634d2865064b7a45da141d6ac62d3ac729ceeace312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:14:02 compute-0 podman[153709]: 2025-12-09 16:14:02.657829263 +0000 UTC m=+0.772122009 container start d41d405250dcc3042e2b7634d2865064b7a45da141d6ac62d3ac729ceeace312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:14:03 compute-0 epic_mclaren[153744]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:14:03 compute-0 epic_mclaren[153744]: --> All data devices are unavailable
Dec 09 16:14:03 compute-0 systemd[1]: libpod-d41d405250dcc3042e2b7634d2865064b7a45da141d6ac62d3ac729ceeace312.scope: Deactivated successfully.
Dec 09 16:14:03 compute-0 podman[153709]: 2025-12-09 16:14:03.257684731 +0000 UTC m=+1.371977487 container attach d41d405250dcc3042e2b7634d2865064b7a45da141d6ac62d3ac729ceeace312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclaren, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:14:03 compute-0 podman[153709]: 2025-12-09 16:14:03.258683829 +0000 UTC m=+1.372976575 container died d41d405250dcc3042e2b7634d2865064b7a45da141d6ac62d3ac729ceeace312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b669c79e72f5ff1639fd764b375fb6f4ce8573f4e33322114641d7b97caa8769-merged.mount: Deactivated successfully.
Dec 09 16:14:03 compute-0 podman[153709]: 2025-12-09 16:14:03.335889675 +0000 UTC m=+1.450182421 container remove d41d405250dcc3042e2b7634d2865064b7a45da141d6ac62d3ac729ceeace312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:14:03 compute-0 systemd[1]: libpod-conmon-d41d405250dcc3042e2b7634d2865064b7a45da141d6ac62d3ac729ceeace312.scope: Deactivated successfully.
Dec 09 16:14:03 compute-0 sudo[153624]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:03 compute-0 sudo[153788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:14:03 compute-0 sudo[153788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:14:03 compute-0 sudo[153788]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:03 compute-0 sudo[153818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:14:03 compute-0 sudo[153818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:14:03 compute-0 ceph-mon[75222]: pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:14:04 compute-0 ceph-mon[75222]: pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:07 compute-0 ceph-mon[75222]: pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:14:09 compute-0 ceph-mon[75222]: pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:09 compute-0 podman[153876]: 2025-12-09 16:14:09.88958876 +0000 UTC m=+1.121593134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:14:09 compute-0 podman[153876]: 2025-12-09 16:14:09.927890144 +0000 UTC m=+1.159894458 container create 3c003c9b27d0188009dca49bc6d3441a37af9662a524d0e0db0c34eabc688f5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hawking, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 09 16:14:09 compute-0 podman[153589]: 2025-12-09 16:14:09.958367935 +0000 UTC m=+8.806493716 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 09 16:14:09 compute-0 systemd[1]: Started libpod-conmon-3c003c9b27d0188009dca49bc6d3441a37af9662a524d0e0db0c34eabc688f5b.scope.
Dec 09 16:14:10 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:14:10 compute-0 podman[153876]: 2025-12-09 16:14:10.025302227 +0000 UTC m=+1.257306601 container init 3c003c9b27d0188009dca49bc6d3441a37af9662a524d0e0db0c34eabc688f5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:14:10 compute-0 podman[153876]: 2025-12-09 16:14:10.0334442 +0000 UTC m=+1.265448524 container start 3c003c9b27d0188009dca49bc6d3441a37af9662a524d0e0db0c34eabc688f5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hawking, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:14:10 compute-0 podman[153876]: 2025-12-09 16:14:10.037543847 +0000 UTC m=+1.269548141 container attach 3c003c9b27d0188009dca49bc6d3441a37af9662a524d0e0db0c34eabc688f5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hawking, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:14:10 compute-0 vigorous_hawking[153922]: 167 167
Dec 09 16:14:10 compute-0 systemd[1]: libpod-3c003c9b27d0188009dca49bc6d3441a37af9662a524d0e0db0c34eabc688f5b.scope: Deactivated successfully.
Dec 09 16:14:10 compute-0 podman[153876]: 2025-12-09 16:14:10.0408243 +0000 UTC m=+1.272828624 container died 3c003c9b27d0188009dca49bc6d3441a37af9662a524d0e0db0c34eabc688f5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hawking, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:14:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffbc5c1597c629616ca8339105c85985cc26800f741abb009477a0ba29685586-merged.mount: Deactivated successfully.
Dec 09 16:14:10 compute-0 podman[153876]: 2025-12-09 16:14:10.097993334 +0000 UTC m=+1.329997628 container remove 3c003c9b27d0188009dca49bc6d3441a37af9662a524d0e0db0c34eabc688f5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:14:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:10 compute-0 systemd[1]: libpod-conmon-3c003c9b27d0188009dca49bc6d3441a37af9662a524d0e0db0c34eabc688f5b.scope: Deactivated successfully.
Dec 09 16:14:10 compute-0 podman[153956]: 2025-12-09 16:14:10.13602854 +0000 UTC m=+0.050177744 container create fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 09 16:14:10 compute-0 podman[153956]: 2025-12-09 16:14:10.109369979 +0000 UTC m=+0.023519193 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 09 16:14:10 compute-0 python3[153561]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 09 16:14:10 compute-0 sudo[153558]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:10 compute-0 podman[154008]: 2025-12-09 16:14:10.295918628 +0000 UTC m=+0.048861176 container create c61add5976071f35fb2376a0d526348331c72355d22c07aca9f33b0761eeed8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:14:10 compute-0 systemd[1]: Started libpod-conmon-c61add5976071f35fb2376a0d526348331c72355d22c07aca9f33b0761eeed8f.scope.
Dec 09 16:14:10 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b343a4f0c6f8bdad9e7e29aba471128027bf2084bc3f6536b2d81719349b0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b343a4f0c6f8bdad9e7e29aba471128027bf2084bc3f6536b2d81719349b0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b343a4f0c6f8bdad9e7e29aba471128027bf2084bc3f6536b2d81719349b0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b343a4f0c6f8bdad9e7e29aba471128027bf2084bc3f6536b2d81719349b0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:10 compute-0 podman[154008]: 2025-12-09 16:14:10.275556267 +0000 UTC m=+0.028498855 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:14:10 compute-0 podman[154008]: 2025-12-09 16:14:10.38104134 +0000 UTC m=+0.133983888 container init c61add5976071f35fb2376a0d526348331c72355d22c07aca9f33b0761eeed8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_driscoll, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:14:10 compute-0 podman[154008]: 2025-12-09 16:14:10.395862154 +0000 UTC m=+0.148804662 container start c61add5976071f35fb2376a0d526348331c72355d22c07aca9f33b0761eeed8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_driscoll, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:14:10 compute-0 podman[154008]: 2025-12-09 16:14:10.399068715 +0000 UTC m=+0.152011303 container attach c61add5976071f35fb2376a0d526348331c72355d22c07aca9f33b0761eeed8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 09 16:14:10 compute-0 happy_driscoll[154046]: {
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:     "0": [
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:         {
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "devices": [
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "/dev/loop3"
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             ],
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_name": "ceph_lv0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_size": "21470642176",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "name": "ceph_lv0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "tags": {
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.cluster_name": "ceph",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.crush_device_class": "",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.encrypted": "0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.objectstore": "bluestore",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.osd_id": "0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.type": "block",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.vdo": "0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.with_tpm": "0"
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             },
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "type": "block",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "vg_name": "ceph_vg0"
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:         }
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:     ],
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:     "1": [
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:         {
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "devices": [
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "/dev/loop4"
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             ],
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_name": "ceph_lv1",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_size": "21470642176",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "name": "ceph_lv1",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "tags": {
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.cluster_name": "ceph",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.crush_device_class": "",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.encrypted": "0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.objectstore": "bluestore",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.osd_id": "1",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.type": "block",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.vdo": "0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.with_tpm": "0"
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             },
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "type": "block",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "vg_name": "ceph_vg1"
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:         }
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:     ],
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:     "2": [
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:         {
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "devices": [
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "/dev/loop5"
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             ],
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_name": "ceph_lv2",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_size": "21470642176",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "name": "ceph_lv2",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "tags": {
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.cluster_name": "ceph",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.crush_device_class": "",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.encrypted": "0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.objectstore": "bluestore",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.osd_id": "2",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.type": "block",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.vdo": "0",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:                 "ceph.with_tpm": "0"
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             },
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "type": "block",
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:             "vg_name": "ceph_vg2"
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:         }
Dec 09 16:14:10 compute-0 happy_driscoll[154046]:     ]
Dec 09 16:14:10 compute-0 happy_driscoll[154046]: }
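The JSON printed by the happy_driscoll container above is a ceph-volume `lvm list --format json` payload: each top-level key is an OSD id ("0", "1", "2") mapping to a list of LV records. A minimal, hypothetical Python sketch for summarising a captured copy of that payload (the capture-file argument and function name are assumptions; the field names match the output above):

    #!/usr/bin/env python3
    # Illustrative only: summarise a saved `ceph-volume lvm list --format json`
    # payload. Top-level keys are OSD ids; each value is a list of LV records
    # with lv_path, vg_name, devices, and a tags dict, as shown in the log.
    import json
    import sys

    def summarize(payload):
        for osd_id, lvs in sorted(payload.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv.get("tags", {})
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"(vg={lv['vg_name']}, devices={','.join(lv['devices'])}, "
                      f"osd_fsid={tags.get('ceph.osd_fsid', '?')})")

    if __name__ == "__main__":
        with open(sys.argv[1]) as fh:
            summarize(json.load(fh))

Against the payload above this prints one line per OSD, e.g. osd.0 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3.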
Dec 09 16:14:10 compute-0 systemd[1]: libpod-c61add5976071f35fb2376a0d526348331c72355d22c07aca9f33b0761eeed8f.scope: Deactivated successfully.
Dec 09 16:14:10 compute-0 podman[154008]: 2025-12-09 16:14:10.790048556 +0000 UTC m=+0.542991084 container died c61add5976071f35fb2376a0d526348331c72355d22c07aca9f33b0761eeed8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 16:14:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-10b343a4f0c6f8bdad9e7e29aba471128027bf2084bc3f6536b2d81719349b0a-merged.mount: Deactivated successfully.
Dec 09 16:14:10 compute-0 sudo[154206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxxaxbsufeqybpyffmdvarhxkcnclokv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296850.4511833-437-247455182433021/AnsiballZ_stat.py'
Dec 09 16:14:10 compute-0 sudo[154206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:10 compute-0 podman[154008]: 2025-12-09 16:14:10.841144275 +0000 UTC m=+0.594086803 container remove c61add5976071f35fb2376a0d526348331c72355d22c07aca9f33b0761eeed8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_driscoll, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:14:10 compute-0 systemd[1]: libpod-conmon-c61add5976071f35fb2376a0d526348331c72355d22c07aca9f33b0761eeed8f.scope: Deactivated successfully.
Dec 09 16:14:10 compute-0 sudo[153818]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:10 compute-0 ceph-mon[75222]: pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:10 compute-0 podman[154156]: 2025-12-09 16:14:10.9137613 +0000 UTC m=+0.156100821 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 09 16:14:10 compute-0 sudo[154224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:14:10 compute-0 sudo[154224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:14:10 compute-0 sudo[154224]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:10 compute-0 sudo[154249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:14:11 compute-0 sudo[154249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:14:11 compute-0 python3.9[154217]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:14:11 compute-0 sudo[154206]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:11 compute-0 podman[154312]: 2025-12-09 16:14:11.338153915 +0000 UTC m=+0.084941968 container create f36fb870e655b39038fb3467fcd8df9090a4c695014781c57a5bfafeea27e078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:14:11 compute-0 podman[154312]: 2025-12-09 16:14:11.275962688 +0000 UTC m=+0.022750751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:14:11 compute-0 systemd[1]: Started libpod-conmon-f36fb870e655b39038fb3467fcd8df9090a4c695014781c57a5bfafeea27e078.scope.
Dec 09 16:14:11 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:14:11 compute-0 podman[154312]: 2025-12-09 16:14:11.425657284 +0000 UTC m=+0.172445357 container init f36fb870e655b39038fb3467fcd8df9090a4c695014781c57a5bfafeea27e078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:14:11 compute-0 podman[154312]: 2025-12-09 16:14:11.432754217 +0000 UTC m=+0.179542260 container start f36fb870e655b39038fb3467fcd8df9090a4c695014781c57a5bfafeea27e078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 16:14:11 compute-0 podman[154312]: 2025-12-09 16:14:11.436032481 +0000 UTC m=+0.182820554 container attach f36fb870e655b39038fb3467fcd8df9090a4c695014781c57a5bfafeea27e078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:14:11 compute-0 boring_kowalevski[154352]: 167 167
Dec 09 16:14:11 compute-0 systemd[1]: libpod-f36fb870e655b39038fb3467fcd8df9090a4c695014781c57a5bfafeea27e078.scope: Deactivated successfully.
Dec 09 16:14:11 compute-0 podman[154312]: 2025-12-09 16:14:11.438677546 +0000 UTC m=+0.185465599 container died f36fb870e655b39038fb3467fcd8df9090a4c695014781c57a5bfafeea27e078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 09 16:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-28bc40e29f199fd1b929720f2f5e2fc6f3569d59ead784be475090a16603310b-merged.mount: Deactivated successfully.
Dec 09 16:14:11 compute-0 podman[154312]: 2025-12-09 16:14:11.477476405 +0000 UTC m=+0.224264448 container remove f36fb870e655b39038fb3467fcd8df9090a4c695014781c57a5bfafeea27e078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_kowalevski, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:14:11 compute-0 systemd[1]: libpod-conmon-f36fb870e655b39038fb3467fcd8df9090a4c695014781c57a5bfafeea27e078.scope: Deactivated successfully.
Dec 09 16:14:11 compute-0 podman[154430]: 2025-12-09 16:14:11.629265991 +0000 UTC m=+0.040553419 container create 048501ae37e9328fd1aa0a95fa7d1b50551b3d1fbd730cd31f6b9ab59f6ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:14:11 compute-0 systemd[1]: Started libpod-conmon-048501ae37e9328fd1aa0a95fa7d1b50551b3d1fbd730cd31f6b9ab59f6ff426.scope.
Dec 09 16:14:11 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:14:11 compute-0 podman[154430]: 2025-12-09 16:14:11.610070563 +0000 UTC m=+0.021358011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:14:11 compute-0 sudo[154496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woxssllpwgircoiggxoufrfnjvrlpxtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296851.3928854-446-53966278644356/AnsiballZ_file.py'
Dec 09 16:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ad9e6bbb679a5922aca76b6b460dd5db3e1cb4cbaccccae7b86f19b26f7ddb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:11 compute-0 sudo[154496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ad9e6bbb679a5922aca76b6b460dd5db3e1cb4cbaccccae7b86f19b26f7ddb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ad9e6bbb679a5922aca76b6b460dd5db3e1cb4cbaccccae7b86f19b26f7ddb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ad9e6bbb679a5922aca76b6b460dd5db3e1cb4cbaccccae7b86f19b26f7ddb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:11 compute-0 podman[154430]: 2025-12-09 16:14:11.723138403 +0000 UTC m=+0.134425841 container init 048501ae37e9328fd1aa0a95fa7d1b50551b3d1fbd730cd31f6b9ab59f6ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:14:11 compute-0 podman[154430]: 2025-12-09 16:14:11.72931962 +0000 UTC m=+0.140607048 container start 048501ae37e9328fd1aa0a95fa7d1b50551b3d1fbd730cd31f6b9ab59f6ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 16:14:11 compute-0 podman[154430]: 2025-12-09 16:14:11.73317367 +0000 UTC m=+0.144461098 container attach 048501ae37e9328fd1aa0a95fa7d1b50551b3d1fbd730cd31f6b9ab59f6ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:14:11 compute-0 python3.9[154498]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:11 compute-0 sudo[154496]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:12 compute-0 sudo[154606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itochldylcqsqrbivifkhdyoutgztnwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296851.3928854-446-53966278644356/AnsiballZ_stat.py'
Dec 09 16:14:12 compute-0 sudo[154606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:12 compute-0 python3.9[154613]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:14:12 compute-0 sudo[154606]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:12 compute-0 lvm[154650]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:14:12 compute-0 lvm[154650]: VG ceph_vg0 finished
Dec 09 16:14:12 compute-0 lvm[154651]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:14:12 compute-0 lvm[154651]: VG ceph_vg1 finished
Dec 09 16:14:12 compute-0 lvm[154661]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:14:12 compute-0 lvm[154661]: VG ceph_vg2 finished
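The pvscan messages above record event-based autoactivation completing ceph_vg0 through ceph_vg2 as /dev/loop3 through /dev/loop5 come online. One way to cross-check the result is LVM's own JSON report; the sketch below is an illustration under stated assumptions (it presumes an LVM2 build with --reportformat json and root privileges) and filters on the ceph.osd_id tag visible in the lv_tags strings earlier in this log:

    #!/usr/bin/env python3
    # Illustrative only: list LVs carrying ceph-volume tags via LVM's JSON
    # report. ceph-volume stores its ceph.* metadata as LVM tags, so the
    # lv_tags column contains the same strings logged by `lvm list` above.
    import json
    import subprocess

    def ceph_tagged_lvs():
        out = subprocess.run(
            ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"],
            check=True, capture_output=True, text=True).stdout
        lvs = json.loads(out)["report"][0]["lv"]
        return [lv for lv in lvs if "ceph.osd_id=" in lv.get("lv_tags", "")]

    if __name__ == "__main__":
        for lv in ceph_tagged_lvs():
            print(lv["vg_name"], lv["lv_name"], lv["lv_tags"])

On this host it would be expected to report ceph_lv0/ceph_vg0, ceph_lv1/ceph_vg1, and ceph_lv2/ceph_vg2 with their ceph.* tag strings.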
Dec 09 16:14:12 compute-0 clever_hodgkin[154488]: {}
Dec 09 16:14:12 compute-0 systemd[1]: libpod-048501ae37e9328fd1aa0a95fa7d1b50551b3d1fbd730cd31f6b9ab59f6ff426.scope: Deactivated successfully.
Dec 09 16:14:12 compute-0 systemd[1]: libpod-048501ae37e9328fd1aa0a95fa7d1b50551b3d1fbd730cd31f6b9ab59f6ff426.scope: Consumed 1.254s CPU time.
Dec 09 16:14:12 compute-0 podman[154430]: 2025-12-09 16:14:12.502322574 +0000 UTC m=+0.913610042 container died 048501ae37e9328fd1aa0a95fa7d1b50551b3d1fbd730cd31f6b9ab59f6ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hodgkin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Dec 09 16:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ad9e6bbb679a5922aca76b6b460dd5db3e1cb4cbaccccae7b86f19b26f7ddb6-merged.mount: Deactivated successfully.
Dec 09 16:14:12 compute-0 podman[154430]: 2025-12-09 16:14:12.546757264 +0000 UTC m=+0.958044682 container remove 048501ae37e9328fd1aa0a95fa7d1b50551b3d1fbd730cd31f6b9ab59f6ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:14:12 compute-0 systemd[1]: libpod-conmon-048501ae37e9328fd1aa0a95fa7d1b50551b3d1fbd730cd31f6b9ab59f6ff426.scope: Deactivated successfully.
Dec 09 16:14:12 compute-0 sudo[154249]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:14:12 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:14:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:14:12 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:14:12 compute-0 sudo[154731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:14:12 compute-0 sudo[154731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:14:12 compute-0 sudo[154731]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:12 compute-0 sudo[154840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qllephywdzfxwleicrmgypphmvwsgycm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296852.4072204-446-148493112819106/AnsiballZ_copy.py'
Dec 09 16:14:12 compute-0 sudo[154840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:13 compute-0 python3.9[154842]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765296852.4072204-446-148493112819106/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:13 compute-0 sudo[154840]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:14:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5638 writes, 24K keys, 5638 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5638 writes, 906 syncs, 6.22 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5638 writes, 24K keys, 5638 commit groups, 1.0 writes per commit group, ingest: 18.89 MB, 0.03 MB/s
                                           Interval WAL: 5638 writes, 906 syncs, 6.22 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
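
[annotation] The ceph-mon RocksDB dump ends here; near-identical "DUMPING STATS" blocks recur every 600 seconds, one table pair per column family ([default], [m-0..2], [p-0..2], [O-0..2], [L], [P]). A minimal sketch of how one might pull each column family's "Sum" row out of a saved excerpt of this journal, assuming it was exported with something like `journalctl > node.log`; the file name, the function name compaction_sums, and the regexes are illustrative reading aids derived from the layout above, not a stable format guarantee:

#!/usr/bin/env python3
"""Sketch: extract per-column-family compaction 'Sum' rows from a journal dump."""
import re

CF_HEADER = re.compile(r"\*\* Compaction Stats \[(?P<cf>[^\]]+)\] \*\*")
SUM_ROW = re.compile(r"^\s*Sum\s+(?P<files>\S+)\s+(?P<size>\S+ \S+)")

def compaction_sums(path: str) -> dict[str, str]:
    """Map column-family name -> its 'Sum' Files/Size cells (first table seen)."""
    sums: dict[str, str] = {}
    cf = None
    with open(path) as fh:
        for line in fh:
            if m := CF_HEADER.search(line):
                cf = m.group("cf")            # e.g. 'default', 'm-2', 'P'
            elif cf and (m := SUM_ROW.match(line.rstrip())):
                sums.setdefault(cf, f"{m.group('files')} files, {m.group('size')}")
    return sums

if __name__ == "__main__":
    for cf, summary in compaction_sums("node.log").items():
        print(f"{cf}: {summary}")
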
Dec 09 16:14:13 compute-0 sudo[154916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-subhskunrkkqyqzgstrrjwgbowzofede ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296852.4072204-446-148493112819106/AnsiballZ_systemd.py'
Dec 09 16:14:13 compute-0 sudo[154916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:13 compute-0 ceph-mon[75222]: pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:14:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:14:13 compute-0 python3.9[154918]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 16:14:13 compute-0 systemd[1]: Reloading.
Dec 09 16:14:13 compute-0 systemd-rc-local-generator[154937]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:14:13 compute-0 systemd-sysv-generator[154940]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:14:13 compute-0 sudo[154916]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:14 compute-0 sudo[155027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmjzlwepkvdiotnygmdbtctnzjchoipv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296852.4072204-446-148493112819106/AnsiballZ_systemd.py'
Dec 09 16:14:14 compute-0 sudo[155027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:14:14 compute-0 python3.9[155029]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:14:14 compute-0 systemd[1]: Reloading.
Dec 09 16:14:14 compute-0 systemd-rc-local-generator[155059]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:14:14 compute-0 systemd-sysv-generator[155063]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:14:14 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec 09 16:14:15 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842b0bbc551dc8639734ff18d73578e37839cd9d0ce8e86f7ac3e0dac9f749c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842b0bbc551dc8639734ff18d73578e37839cd9d0ce8e86f7ac3e0dac9f749c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 09 16:14:15 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692.
Dec 09 16:14:15 compute-0 podman[155070]: 2025-12-09 16:14:15.102089688 +0000 UTC m=+0.152586511 container init fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: + sudo -E kolla_set_configs
Dec 09 16:14:15 compute-0 podman[155070]: 2025-12-09 16:14:15.14207068 +0000 UTC m=+0.192567493 container start fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 09 16:14:15 compute-0 edpm-start-podman-container[155070]: ovn_metadata_agent
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Validating config file
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Copying service configuration files
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Writing out command to execute
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 09 16:14:15 compute-0 edpm-start-podman-container[155069]: Creating additional drop-in dependency for "ovn_metadata_agent" (fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692)
Dec 09 16:14:15 compute-0 podman[155093]: 2025-12-09 16:14:15.223345722 +0000 UTC m=+0.060841019 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: ++ cat /run_command
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: + CMD=neutron-ovn-metadata-agent
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: + ARGS=
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: + sudo kolla_copy_cacerts
Dec 09 16:14:15 compute-0 systemd[1]: Reloading.
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: + [[ ! -n '' ]]
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: + . kolla_extend_start
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: Running command: 'neutron-ovn-metadata-agent'
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: + umask 0022
Dec 09 16:14:15 compute-0 ovn_metadata_agent[155086]: + exec neutron-ovn-metadata-agent
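
[annotation] The INFO lines above trace the standard kolla container start protocol: load and validate /var/lib/kolla/config_files/config.json, copy the listed config files into place, set permissions, write the command to /run_command, then exec it. A plausible reconstruction of that config.json, grounded only in the steps logged above and shown as a Python dict for annotation (the real file is JSON); the owner and perm values are assumptions, not shown in the log:

config = {
    "command": "neutron-ovn-metadata-agent",            # echoed via `cat /run_command` above
    "config_files": [
        {
            "source": "/etc/neutron.conf.d/01-rootwrap.conf",
            "dest": "/etc/neutron/rootwrap.conf",        # the delete-then-copy logged above
            "owner": "neutron",                          # assumption: not visible in the log
            "perm": "0600",                              # assumption: not visible in the log
        },
    ],
    "permissions": [
        # Matches the "Setting permission for /var/lib/neutron..." lines above.
        {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": True},
    ],
}
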
Dec 09 16:14:15 compute-0 systemd-rc-local-generator[155163]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:14:15 compute-0 systemd-sysv-generator[155168]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:14:15 compute-0 systemd[1]: Started ovn_metadata_agent container.
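
[annotation] For orientation, the config_data dictionary embedded in the podman container-init and health_status events above is the declarative source for this ovn_metadata_agent container. A rough, hypothetical rendering of how such a dict maps onto ordinary podman run flags; this is a reading aid, not the edpm_ansible implementation, and only widely documented podman options are used (the helper name podman_args is invented):

def podman_args(name: str, cfg: dict) -> list[str]:
    """Translate an edpm-style config_data dict into a podman run argv."""
    args = ["podman", "run", "--name", name, "--detach"]
    if cfg.get("net"):
        args += ["--net", cfg["net"]]            # 'net': 'host' above
    if cfg.get("pid"):
        args += ["--pid", cfg["pid"]]            # 'pid': 'host' above
    if cfg.get("cgroupns"):
        args += ["--cgroupns", cfg["cgroupns"]]
    if cfg.get("privileged"):
        args.append("--privileged")
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    if cfg.get("restart"):
        args += ["--restart", cfg["restart"]]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]                # e.g. '/run/openvswitch:/run/openvswitch:z'
    args.append(cfg["image"])
    return args
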
Dec 09 16:14:15 compute-0 sudo[155027]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:15 compute-0 ceph-mon[75222]: pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:16 compute-0 sshd-session[146024]: Connection closed by 192.168.122.30 port 56210
Dec 09 16:14:16 compute-0 sshd-session[146021]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:14:16 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Dec 09 16:14:16 compute-0 systemd[1]: session-48.scope: Consumed 53.720s CPU time.
Dec 09 16:14:16 compute-0 systemd-logind[786]: Session 48 logged out. Waiting for processes to exit.
Dec 09 16:14:16 compute-0 systemd-logind[786]: Removed session 48.
Dec 09 16:14:17 compute-0 ceph-mon[75222]: pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:14:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 6907 writes, 28K keys, 6907 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6907 writes, 1321 syncs, 5.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6907 writes, 28K keys, 6907 commit groups, 1.0 writes per commit group, ingest: 19.91 MB, 0.03 MB/s
                                           Interval WAL: 6907 writes, 1321 syncs, 5.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
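
[annotation] A quick arithmetic cross-check of the DB Stats block above, with values copied from the dump (nothing here queries a live cluster): 6907 WAL writes over 1321 syncs reproduces the reported 5.23 writes per sync, and 0.02 GB ingested over the ~600-second uptime reproduces the reported 0.03 MB/s:

writes, syncs = 6907, 1321
ingest_gb, uptime_s = 0.02, 600.2
print(round(writes / syncs, 2))               # -> 5.23 writes per sync
print(round(ingest_gb * 1024 / uptime_s, 2))  # -> 0.03 MB/s average ingest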
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.787 155091 INFO neutron.common.config [-] Logging enabled!
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.788 155091 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.788 155091 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.788 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.789 155091 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.789 155091 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.789 155091 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.789 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.789 155091 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.789 155091 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.790 155091 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.790 155091 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.790 155091 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.790 155091 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.790 155091 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.790 155091 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.790 155091 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.791 155091 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.791 155091 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.791 155091 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.791 155091 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.791 155091 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.791 155091 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.792 155091 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.792 155091 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.792 155091 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.792 155091 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.792 155091 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.792 155091 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.792 155091 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.793 155091 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.793 155091 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.793 155091 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.793 155091 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.793 155091 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.793 155091 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.794 155091 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.794 155091 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.794 155091 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.794 155091 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.794 155091 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.794 155091 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.795 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.795 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.795 155091 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.795 155091 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.795 155091 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.795 155091 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.795 155091 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.795 155091 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.796 155091 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.796 155091 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.796 155091 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.796 155091 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.796 155091 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.796 155091 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.796 155091 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.796 155091 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.797 155091 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.797 155091 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.797 155091 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.797 155091 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.797 155091 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.797 155091 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.797 155091 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.798 155091 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.798 155091 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.798 155091 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.798 155091 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.798 155091 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.798 155091 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.798 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.799 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.799 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.799 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.799 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.799 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.800 155091 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.800 155091 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.800 155091 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.800 155091 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.800 155091 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.800 155091 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.800 155091 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.801 155091 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.801 155091 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.801 155091 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.801 155091 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.801 155091 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.801 155091 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.801 155091 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.802 155091 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.802 155091 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.802 155091 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.802 155091 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.802 155091 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.802 155091 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.802 155091 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.803 155091 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.803 155091 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.803 155091 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.803 155091 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.803 155091 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.803 155091 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.803 155091 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.804 155091 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.804 155091 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.804 155091 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.804 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.804 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.804 155091 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.805 155091 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.805 155091 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.805 155091 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.805 155091 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.805 155091 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.805 155091 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.806 155091 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.806 155091 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.806 155091 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.806 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.806 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.806 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.806 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.807 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.807 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.807 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.807 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.807 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.807 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.808 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.808 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.808 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.808 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.808 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.808 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.808 155091 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.809 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.809 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.809 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.809 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.809 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.809 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.809 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.810 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.810 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.810 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.810 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.810 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.810 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.810 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.810 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.810 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.811 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.811 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.811 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.811 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.811 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.811 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.811 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.811 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.811 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.812 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.812 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.812 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.812 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.812 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.812 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.812 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.812 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.812 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.812 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.813 155091 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.813 155091 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.813 155091 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.813 155091 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.813 155091 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.813 155091 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.813 155091 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.813 155091 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.813 155091 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.814 155091 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.814 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.814 155091 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.814 155091 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.814 155091 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.814 155091 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.814 155091 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.814 155091 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.814 155091 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.815 155091 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.815 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.815 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.815 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.815 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.815 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.815 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.815 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.815 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.816 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.816 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.816 155091 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.816 155091 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.816 155091 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.816 155091 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.816 155091 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.816 155091 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.816 155091 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.817 155091 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.817 155091 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.817 155091 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.817 155091 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.817 155091 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.817 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.817 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.817 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.817 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.818 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.818 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.818 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.818 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.818 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.818 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.818 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.818 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.819 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.819 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.819 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.819 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.819 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.819 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.819 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.819 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.819 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.820 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.820 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.820 155091 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.820 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.820 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.820 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.820 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.820 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.820 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.821 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.821 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.821 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.821 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.821 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.821 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.821 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.821 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.821 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.822 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.822 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.822 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.822 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.822 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.822 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.822 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.822 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.823 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.823 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.823 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.823 155091 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.823 155091 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.823 155091 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.823 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.823 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.823 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.824 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.824 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.824 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.824 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.824 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.824 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.824 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.824 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.825 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.825 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.825 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.825 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.825 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.825 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.825 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.825 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.825 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.826 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.826 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.826 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.826 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.826 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.826 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.826 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.826 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.827 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.827 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.827 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.827 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.827 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.827 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.827 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.827 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.827 155091 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.827 155091 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
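The block ending in the asterisk row above is oslo.config's standard startup dump: with debug enabled, ConfigOpts.log_opt_values() emits one "name = value" DEBUG line per registered option (DEFAULT options first, then each group such as ovn.* or oslo_messaging_rabbit.*), masks options registered as secret (hence transport_url = ****), and closes with the asterisk line from cfg.py:2613. A minimal sketch of the same mechanism, using a hypothetical option "my_opt" in a hypothetical group "demo":

    import logging

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.StrOpt('my_opt', default='x'),
                        cfg.StrOpt('token', secret=True)], group='demo')
    CONF([])  # parse an empty command line

    logging.basicConfig(level=logging.DEBUG)
    # One "demo.my_opt = x" DEBUG line per option, secrets masked as
    # ****, terminated by a row of asterisks -- the format seen above.
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)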
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.837 155091 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.837 155091 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.837 155091 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.837 155091 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.838 155091 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
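The three "Created schema index" lines are ovsdbapp indexing the columns it looks up most often (Bridge.name, Port.name, Interface.name) in its in-memory replica of the local Open_vSwitch database, just before the IDL connects to ovsdb-server on tcp:127.0.0.1:6640. A sketch of that setup through ovsdbapp's public API (an assumed equivalent, not the agent's exact code path):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640',
                                          'Open_vSwitch')
    conn = connection.Connection(idl, timeout=180)
    # Constructing the API object creates the schema indexes and opens
    # the connection ("connecting..." / "connected" above).
    api = impl_idl.OvsdbIdl(conn)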
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.851 155091 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 037f0e18-4bfd-4487-a7a8-05ae973391a9 (UUID: 037f0e18-4bfd-4487-a7a8-05ae973391a9) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
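_load_config resolves the chassis identity from external_ids on the single-row Open_vSwitch table of the local switch: system-id becomes the chassis name and ovn-bridge the integration bridge. An assumed sketch, reusing the api handle from the previous snippet (the '.' record and the br-int fallback are assumptions):

    ext_ids = api.db_get('Open_vSwitch', '.', 'external_ids').execute(
        check_error=True)
    chassis_name = ext_ids['system-id']              # 037f0e18-4bfd-...
    ovn_bridge = ext_ids.get('ovn-bridge', 'br-int')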
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.884 155091 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.885 155091 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.885 155091 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.885 155091 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.888 155091 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.894 155091 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
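Unlike the local socket, the southbound connection is TLS: the ovn.ovn_sb_* options dumped earlier (private key, certificate, CA bundle) must be handed to python-ovs before the ssl: target can be opened. A hedged sketch of that wiring:

    from ovs import stream
    from ovsdbapp.backend.ovs_idl import connection

    stream.Stream.ssl_set_private_key_file('/etc/pki/tls/private/ovndb.key')
    stream.Stream.ssl_set_certificate_file('/etc/pki/tls/certs/ovndb.crt')
    stream.Stream.ssl_set_ca_cert_file('/etc/pki/tls/certs/ovndbca.crt')

    sb_idl = connection.OvsdbIdl.from_server(
        'ssl:ovsdbserver-sb.openstack.svc:6642', 'OVN_Southbound')
    sb_conn = connection.Connection(sb_idl, timeout=180)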
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.900 155091 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '037f0e18-4bfd-4487-a7a8-05ae973391a9'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f7a90f82b80>], external_ids={}, name=037f0e18-4bfd-4487-a7a8-05ae973391a9, nb_cfg_timestamp=1765296798936, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
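The "Matched CREATE" line is ovsdbapp's event framework firing: the agent registered a watch for the creation of its own Chassis_Private row, and the repr in the log spells out the constructor arguments (events, table, conditions). A sketch of such an event class mirroring that repr; the run() body is illustrative:

    from ovsdbapp import event as row_event

    class ChassisPrivateCreateEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            super().__init__((self.ROW_CREATE,), 'Chassis_Private',
                             (('name', '=', chassis_name),))

        def run(self, event, row, old):
            # Invoked once the matching row appears in the SB database.
            print('chassis registered:', row.name)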
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.901 155091 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f7a90f04c10>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.901 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.902 155091 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.902 155091 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
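The Acquiring/Acquired/Releasing triple around "singleton_lock" is oslo.concurrency's in-process lock, which oslo.service takes briefly while setting up its launcher. The equivalent primitive, as a minimal sketch:

    from oslo_concurrency import lockutils

    with lockutils.lock('singleton_lock'):
        pass  # critical section; the three DEBUG lines above bracket it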
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.902 155091 INFO oslo_service.service [-] Starting 1 workers
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.906 155091 DEBUG oslo_service.service [-] Started child 155210 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.909 155210 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-1998569'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
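"Starting 1 workers" / "Started child 155210" is oslo.service's ProcessLauncher forking the metadata proxy worker; the child then publishes neutron-lib's after_init callback (post_fork_initialize), which is where it opens its own SB connection below. An assumed minimal shape of that launch step, with MetadataProxy standing in for the agent's real WSGI service class:

    from oslo_config import cfg
    from oslo_service import service

    class MetadataProxy(service.Service):
        def start(self):
            super().start()
            # the forked child (155210 above) continues from here

    launcher = service.ProcessLauncher(cfg.CONF)
    launcher.launch_service(MetadataProxy(), workers=1)  # "Starting 1 workers"
    launcher.wait()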
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.910 155091 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp2hzlk811/privsep.sock']
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.931 155210 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.932 155210 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.932 155210 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.935 155210 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.943 155210 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 09 16:14:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:17.952 155210 INFO eventlet.wsgi.server [-] (155210) wsgi starting up on http:/var/lib/neutron/metadata_proxy
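The wsgi URL above looks truncated but is not: the proxy listens on a unix domain socket, and eventlet prints the socket path after "http:" with no host part. A sketch of serving WSGI on a unix socket with eventlet; the trivial app is illustrative:

    import socket

    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'metadata\n']

    sock = eventlet.listen('/var/lib/neutron/metadata_proxy',
                           family=socket.AF_UNIX)
    wsgi.server(sock, app)  # logs "wsgi starting up on http:/<path>"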
Dec 09 16:14:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:18 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 09 16:14:18 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:18.573 155091 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 09 16:14:18 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:18.574 155091 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp2hzlk811/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 09 16:14:18 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:18.456 155215 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 09 16:14:18 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:18.464 155215 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 09 16:14:18 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:18.468 155215 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 09 16:14:18 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:18.469 155215 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155215
Dec 09 16:14:18 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:18.576 155215 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6d0f48-a111-4a60-8490-58680240722e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.040 155215 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.040 155215 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.040 155215 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:14:19 compute-0 ceph-mon[75222]: pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.567 155215 DEBUG oslo.privsep.daemon [-] privsep: reply[90a1f899-29cc-413f-b827-12a9007717ae]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.570 155091 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=037f0e18-4bfd-4487-a7a8-05ae973391a9, column=external_ids, values=({'neutron:ovn-metadata-id': '3ba39c93-bae5-5c9f-a6a8-33e696ba0021'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.581 155091 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=037f0e18-4bfd-4487-a7a8-05ae973391a9, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.588 155091 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.588 155091 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.588 155091 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.588 155091 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.588 155091 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.589 155091 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.589 155091 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.589 155091 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.589 155091 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.589 155091 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.590 155091 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.590 155091 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.590 155091 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.590 155091 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.590 155091 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.591 155091 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.591 155091 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.591 155091 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.591 155091 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.591 155091 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.592 155091 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.592 155091 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.592 155091 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.592 155091 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.592 155091 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.593 155091 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.593 155091 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.593 155091 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.593 155091 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.593 155091 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.594 155091 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.594 155091 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.594 155091 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.594 155091 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.594 155091 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.595 155091 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.595 155091 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.595 155091 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.595 155091 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.595 155091 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.596 155091 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.596 155091 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.596 155091 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.596 155091 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.596 155091 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.597 155091 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.597 155091 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.597 155091 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.597 155091 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.597 155091 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.597 155091 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.598 155091 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.598 155091 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.598 155091 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.598 155091 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.598 155091 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.598 155091 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.599 155091 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.599 155091 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.599 155091 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.599 155091 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.599 155091 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.599 155091 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.599 155091 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.600 155091 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.600 155091 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.600 155091 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.600 155091 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.600 155091 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.600 155091 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.601 155091 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.601 155091 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.601 155091 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.601 155091 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.601 155091 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.601 155091 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.602 155091 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.602 155091 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.602 155091 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.602 155091 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.602 155091 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.602 155091 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.603 155091 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.603 155091 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.603 155091 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.603 155091 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.603 155091 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.604 155091 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.604 155091 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.604 155091 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.604 155091 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.604 155091 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.604 155091 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.605 155091 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.605 155091 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.605 155091 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.605 155091 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.605 155091 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.605 155091 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.606 155091 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.606 155091 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.606 155091 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.606 155091 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.606 155091 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.606 155091 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.606 155091 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.607 155091 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.607 155091 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.607 155091 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.607 155091 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.607 155091 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.608 155091 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.608 155091 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.608 155091 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.608 155091 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.608 155091 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.608 155091 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.608 155091 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.609 155091 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.609 155091 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.609 155091 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.609 155091 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.609 155091 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.609 155091 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.610 155091 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.610 155091 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.610 155091 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.610 155091 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.610 155091 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.610 155091 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.611 155091 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.611 155091 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.611 155091 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.611 155091 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.611 155091 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.611 155091 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.612 155091 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.612 155091 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.612 155091 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.612 155091 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.612 155091 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.612 155091 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.612 155091 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.613 155091 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.613 155091 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.613 155091 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.613 155091 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.613 155091 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.613 155091 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.613 155091 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.614 155091 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.614 155091 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.614 155091 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.614 155091 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.614 155091 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.614 155091 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.614 155091 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.615 155091 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.615 155091 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.615 155091 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.615 155091 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.615 155091 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.615 155091 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.615 155091 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.616 155091 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.616 155091 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.616 155091 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.616 155091 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.616 155091 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.616 155091 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.616 155091 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.617 155091 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.617 155091 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.617 155091 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.617 155091 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.617 155091 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.617 155091 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.618 155091 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.618 155091 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.618 155091 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.618 155091 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.618 155091 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.618 155091 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.619 155091 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.619 155091 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.619 155091 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.619 155091 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.619 155091 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.619 155091 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.620 155091 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.620 155091 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.620 155091 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.620 155091 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.620 155091 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.620 155091 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.620 155091 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.621 155091 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.621 155091 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.621 155091 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.621 155091 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.621 155091 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.622 155091 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.622 155091 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.623 155091 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.623 155091 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.624 155091 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.624 155091 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.624 155091 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.625 155091 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.625 155091 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.625 155091 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.625 155091 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.625 155091 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.626 155091 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.626 155091 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.626 155091 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.626 155091 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.627 155091 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.627 155091 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.627 155091 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.628 155091 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.628 155091 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.628 155091 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.629 155091 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.629 155091 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.629 155091 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.629 155091 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.629 155091 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.629 155091 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.630 155091 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.630 155091 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.630 155091 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.630 155091 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.630 155091 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.631 155091 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.631 155091 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.631 155091 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.631 155091 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.631 155091 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.632 155091 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.632 155091 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.632 155091 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.632 155091 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.632 155091 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.633 155091 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.633 155091 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.633 155091 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.633 155091 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.633 155091 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.634 155091 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.634 155091 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.634 155091 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.634 155091 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.634 155091 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.635 155091 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.635 155091 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.635 155091 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.635 155091 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.635 155091 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.636 155091 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.636 155091 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.636 155091 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.636 155091 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.636 155091 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.636 155091 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.636 155091 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.637 155091 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.637 155091 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.637 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.637 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.637 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.637 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.638 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.638 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.638 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.638 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.638 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.638 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.639 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.639 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.639 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.639 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.639 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.639 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.639 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.640 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.640 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.640 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.640 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.640 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.640 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.641 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.641 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.641 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.641 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.641 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.641 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.642 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.642 155091 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.642 155091 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.642 155091 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.642 155091 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.642 155091 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:14:19 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:14:19.643 155091 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 09 16:14:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:21 compute-0 ceph-mon[75222]: pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:22 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:14:22 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5500 writes, 23K keys, 5500 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5500 writes, 820 syncs, 6.71 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5500 writes, 23K keys, 5500 commit groups, 1.0 writes per commit group, ingest: 18.60 MB, 0.03 MB/s
                                           Interval WAL: 5500 writes, 820 syncs, 6.71 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:14:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:22 compute-0 sshd-session[155220]: Accepted publickey for zuul from 192.168.122.30 port 55954 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:14:22 compute-0 systemd-logind[786]: New session 49 of user zuul.
Dec 09 16:14:22 compute-0 systemd[1]: Started Session 49 of User zuul.
Dec 09 16:14:22 compute-0 sshd-session[155220]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:14:23 compute-0 ceph-mon[75222]: pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:23 compute-0 python3.9[155373]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:14:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:24 compute-0 ceph-mgr[75515]: [devicehealth INFO root] Check health
Dec 09 16:14:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:14:24 compute-0 sudo[155527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bevkduoojeniamsnonfwkijkesanjvwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296863.9539292-34-217008677002275/AnsiballZ_command.py'
Dec 09 16:14:24 compute-0 sudo[155527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:24 compute-0 python3.9[155529]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:14:24 compute-0 sudo[155527]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:25 compute-0 ceph-mon[75222]: pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:25 compute-0 sudo[155692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwoejrnazyzibhfceziclbwmvbfdiuis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296864.9520013-45-210910277627171/AnsiballZ_systemd_service.py'
Dec 09 16:14:25 compute-0 sudo[155692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:25 compute-0 python3.9[155694]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 16:14:25 compute-0 systemd[1]: Reloading.
Dec 09 16:14:25 compute-0 systemd-rc-local-generator[155723]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:14:25 compute-0 systemd-sysv-generator[155726]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:14:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:14:25
Dec 09 16:14:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:14:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:14:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'default.rgw.log', 'backups', 'vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes']
Dec 09 16:14:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:14:26 compute-0 sudo[155692]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:14:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:14:27 compute-0 python3.9[155880]: ansible-ansible.builtin.service_facts Invoked
Dec 09 16:14:27 compute-0 network[155897]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 16:14:27 compute-0 network[155898]: 'network-scripts' will be removed from distribution in near future.
Dec 09 16:14:27 compute-0 network[155899]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 16:14:27 compute-0 ceph-mon[75222]: pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:29 compute-0 ceph-mon[75222]: pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:14:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:30 compute-0 sudo[156159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkfltgfuthyvtlxdnuegthooicykpfov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296870.0822768-64-205992171638264/AnsiballZ_systemd_service.py'
Dec 09 16:14:30 compute-0 sudo[156159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:30 compute-0 python3.9[156161]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:14:30 compute-0 sudo[156159]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:31 compute-0 sudo[156312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmhlkukviovywpushyhmqvuxjcnaepiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296870.9026065-64-276465034148893/AnsiballZ_systemd_service.py'
Dec 09 16:14:31 compute-0 sudo[156312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:31 compute-0 ceph-mon[75222]: pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:31 compute-0 python3.9[156314]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:14:31 compute-0 sudo[156312]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:31 compute-0 sudo[156465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nckjogteqllydpnffumejuhxtyxywczs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296871.6774764-64-237727717170372/AnsiballZ_systemd_service.py'
Dec 09 16:14:31 compute-0 sudo[156465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:32 compute-0 python3.9[156467]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:14:32 compute-0 sudo[156465]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:32 compute-0 sudo[156618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjxikhgmwdrnvtskoxiivxzuqxqtacbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296872.454414-64-172278973141849/AnsiballZ_systemd_service.py'
Dec 09 16:14:32 compute-0 sudo[156618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:33 compute-0 python3.9[156620]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:14:33 compute-0 sudo[156618]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:33 compute-0 ceph-mon[75222]: pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:33 compute-0 sshd-session[156646]: Invalid user dspace from 146.190.31.45 port 33060
Dec 09 16:14:33 compute-0 sudo[156773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjbjonuzopoyggilgxsifrljidpwgmie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296873.3735294-64-169750753272222/AnsiballZ_systemd_service.py'
Dec 09 16:14:33 compute-0 sudo[156773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:33 compute-0 sshd-session[156646]: Connection closed by invalid user dspace 146.190.31.45 port 33060 [preauth]
Dec 09 16:14:34 compute-0 python3.9[156775]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:14:34 compute-0 sudo[156773]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:14:34 compute-0 sudo[156926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgvzeckrgmdctlywhddlefzyieolrwok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296874.3465886-64-161407292260043/AnsiballZ_systemd_service.py'
Dec 09 16:14:34 compute-0 sudo[156926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:34 compute-0 python3.9[156928]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:14:35 compute-0 sudo[156926]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:35 compute-0 ceph-mon[75222]: pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:35 compute-0 sudo[157079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bifumineiysrhkxrfuuemhjzadcgyvqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296875.1845512-64-236185102804157/AnsiballZ_systemd_service.py'
Dec 09 16:14:35 compute-0 sudo[157079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:35 compute-0 python3.9[157081]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:14:35 compute-0 sudo[157079]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:14:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:14:36 compute-0 sudo[157232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flifxbbgtoeobbmgmnizdujgzhyncckf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296876.2201803-116-40216606777857/AnsiballZ_file.py'
Dec 09 16:14:36 compute-0 sudo[157232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:36 compute-0 python3.9[157234]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:36 compute-0 sudo[157232]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:37 compute-0 sudo[157384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyckguwufipyyulbjgwckxdkxieqxmce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296877.0839505-116-14931349141997/AnsiballZ_file.py'
Dec 09 16:14:37 compute-0 sudo[157384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:37 compute-0 ceph-mon[75222]: pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:37 compute-0 python3.9[157386]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:37 compute-0 sudo[157384]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:38 compute-0 sudo[157536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obuninvvabwbakfrkhnhbthslrkrbxns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296877.773279-116-119180452158980/AnsiballZ_file.py'
Dec 09 16:14:38 compute-0 sudo[157536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:38 compute-0 python3.9[157538]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:38 compute-0 sudo[157536]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:38 compute-0 sudo[157688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwpygcgtonavpqikkgapgkvmpiohofrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296878.4623516-116-88613409833865/AnsiballZ_file.py'
Dec 09 16:14:38 compute-0 sudo[157688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:38 compute-0 python3.9[157690]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:38 compute-0 sudo[157688]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:39 compute-0 sudo[157840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvwfzjhlseogdwiagetznxjszacasuxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296879.1073277-116-262175804312179/AnsiballZ_file.py'
Dec 09 16:14:39 compute-0 sudo[157840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:14:39 compute-0 ceph-mon[75222]: pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:39 compute-0 python3.9[157842]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:39 compute-0 sudo[157840]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:40 compute-0 sudo[157992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mblymvvvsgcqfkemzgffgkzqhqxylwog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296879.7671723-116-213850647554835/AnsiballZ_file.py'
Dec 09 16:14:40 compute-0 sudo[157992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:40 compute-0 python3.9[157994]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:40 compute-0 sudo[157992]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:40 compute-0 sudo[158144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlmxirzqotctqwgbzdjffheffqnmduig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296880.4542053-116-75246580861924/AnsiballZ_file.py'
Dec 09 16:14:40 compute-0 sudo[158144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:40 compute-0 python3.9[158146]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:40 compute-0 sudo[158144]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:41 compute-0 sudo[158306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyqxexsdfnqbwayautnrghocxcbjycmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296881.2199628-166-197729839755281/AnsiballZ_file.py'
Dec 09 16:14:41 compute-0 ceph-mon[75222]: pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:41 compute-0 sudo[158306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:41 compute-0 podman[158270]: 2025-12-09 16:14:41.574538895 +0000 UTC m=+0.108390847 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 09 16:14:41 compute-0 python3.9[158315]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:41 compute-0 sudo[158306]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:42 compute-0 sudo[158474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sytasvyftbzfyjwjqfckfyeansimpekh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296881.8634286-166-96084380135772/AnsiballZ_file.py'
Dec 09 16:14:42 compute-0 sudo[158474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:42 compute-0 python3.9[158476]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:42 compute-0 sudo[158474]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:42 compute-0 sudo[158626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnewemqqrcntoqwcebyabdznpwmfqqgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296882.547814-166-21474761029317/AnsiballZ_file.py'
Dec 09 16:14:42 compute-0 sudo[158626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:43 compute-0 python3.9[158628]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:43 compute-0 sudo[158626]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:43 compute-0 ceph-mon[75222]: pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:43 compute-0 sudo[158778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqkwuqiishfykybfwddmpmgfiihwoytk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296883.3398423-166-148121853861478/AnsiballZ_file.py'
Dec 09 16:14:43 compute-0 sudo[158778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:43 compute-0 python3.9[158780]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:43 compute-0 sudo[158778]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:44 compute-0 sudo[158930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cusytrygsaqsbovplvuposuqtlkqauwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296883.9238477-166-202770658807569/AnsiballZ_file.py'
Dec 09 16:14:44 compute-0 sudo[158930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:44 compute-0 python3.9[158932]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:44 compute-0 sudo[158930]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:14:44 compute-0 sudo[159082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrkdjbkscjxjnitvjnsgopakbkxotjvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296884.5825284-166-15037689746992/AnsiballZ_file.py'
Dec 09 16:14:44 compute-0 sudo[159082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:45 compute-0 python3.9[159084]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:45 compute-0 sudo[159082]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:45 compute-0 sudo[159251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeymqcopahpufklncibdlbmcpjvzpygq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296885.1797733-166-90239374456915/AnsiballZ_file.py'
Dec 09 16:14:45 compute-0 podman[159208]: 2025-12-09 16:14:45.467900125 +0000 UTC m=+0.049266209 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 16:14:45 compute-0 sudo[159251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:45 compute-0 ceph-mon[75222]: pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:45 compute-0 python3.9[159255]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:14:45 compute-0 sudo[159251]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:46 compute-0 sudo[159405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsirhueeemcvdhmiklfggflflizuuahe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296885.9101481-217-41487185675229/AnsiballZ_command.py'
Dec 09 16:14:46 compute-0 sudo[159405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:46 compute-0 python3.9[159407]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:14:46 compute-0 sudo[159405]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:47 compute-0 python3.9[159559]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 16:14:47 compute-0 ceph-mon[75222]: pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:47 compute-0 sudo[159709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcmsngqkwgcoxbhdlokddwolgmfcqdqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296887.4811296-235-77545520031171/AnsiballZ_systemd_service.py'
Dec 09 16:14:47 compute-0 sudo[159709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:48 compute-0 python3.9[159711]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 16:14:48 compute-0 systemd[1]: Reloading.
Dec 09 16:14:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:48 compute-0 systemd-rc-local-generator[159738]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:14:48 compute-0 systemd-sysv-generator[159743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:14:48 compute-0 sudo[159709]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:48 compute-0 sudo[159897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gibqkehmxdaoappbdbwkfmvyhajzqzhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296888.570188-243-137733748771099/AnsiballZ_command.py'
Dec 09 16:14:48 compute-0 sudo[159897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:49 compute-0 python3.9[159899]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:14:49 compute-0 sudo[159897]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:14:49 compute-0 sudo[160050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnehckquexzwrcdpsoofjmlchhuuhulc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296889.1792905-243-60234366370062/AnsiballZ_command.py'
Dec 09 16:14:49 compute-0 sudo[160050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:49 compute-0 ceph-mon[75222]: pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:49 compute-0 python3.9[160052]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:14:49 compute-0 sudo[160050]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:50 compute-0 sudo[160203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjlkgbgabciybbulvyhiyscyosopjmql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296889.7846153-243-219293153194170/AnsiballZ_command.py'
Dec 09 16:14:50 compute-0 sudo[160203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:50 compute-0 python3.9[160205]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:14:50 compute-0 sudo[160203]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:50 compute-0 sudo[160356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlivqesqetyibiuhljkipkokgpnvwuon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296890.3999298-243-78730353779319/AnsiballZ_command.py'
Dec 09 16:14:50 compute-0 sudo[160356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:50 compute-0 python3.9[160358]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:14:50 compute-0 sudo[160356]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:51 compute-0 sudo[160509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzmptlecnoamzyotummkmmanhzbrvanm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296891.0589554-243-109847743090698/AnsiballZ_command.py'
Dec 09 16:14:51 compute-0 sudo[160509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:51 compute-0 ceph-mon[75222]: pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:51 compute-0 python3.9[160511]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:14:51 compute-0 sudo[160509]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:52 compute-0 sudo[160662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksjnawhkouqsdmnstxvdwtzbbdfppqls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296891.772295-243-172361461705922/AnsiballZ_command.py'
Dec 09 16:14:52 compute-0 sudo[160662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:52 compute-0 python3.9[160664]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:14:52 compute-0 sudo[160662]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:52 compute-0 sudo[160815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lksozexuakytaedfcportywetzituytg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296892.3809745-243-144374344297244/AnsiballZ_command.py'
Dec 09 16:14:52 compute-0 sudo[160815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:52 compute-0 python3.9[160817]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:14:52 compute-0 sudo[160815]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:53 compute-0 ceph-mon[75222]: pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:53 compute-0 sudo[160968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzetorcklemijfapierjudojhurksues ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296893.252312-297-27471577188610/AnsiballZ_getent.py'
Dec 09 16:14:53 compute-0 sudo[160968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:53 compute-0 python3.9[160970]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 09 16:14:53 compute-0 sudo[160968]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:14:54 compute-0 sudo[161121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydmtikbhxqgeqavxdbmgdnnvkzucfkkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296894.12106-305-6205284553317/AnsiballZ_group.py'
Dec 09 16:14:54 compute-0 sudo[161121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:54 compute-0 python3.9[161123]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 16:14:54 compute-0 groupadd[161124]: group added to /etc/group: name=libvirt, GID=42473
Dec 09 16:14:54 compute-0 groupadd[161124]: group added to /etc/gshadow: name=libvirt
Dec 09 16:14:54 compute-0 groupadd[161124]: new group: name=libvirt, GID=42473
Dec 09 16:14:54 compute-0 sudo[161121]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:55 compute-0 ceph-mon[75222]: pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:55 compute-0 sudo[161279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtorvzvzyxkwvpyscyxrjdgkfrhkyghq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296895.0138748-313-176879432694062/AnsiballZ_user.py'
Dec 09 16:14:55 compute-0 sudo[161279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:55 compute-0 python3.9[161281]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 09 16:14:55 compute-0 useradd[161283]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 09 16:14:55 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 16:14:55 compute-0 sudo[161279]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:14:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:14:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:14:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:14:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:14:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:14:56 compute-0 sudo[161440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkefztcyuidvnfrjbarioinjxjiydokb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296896.3038125-324-52227687473460/AnsiballZ_setup.py'
Dec 09 16:14:56 compute-0 sudo[161440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:56 compute-0 python3.9[161442]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:14:57 compute-0 sudo[161440]: pam_unix(sudo:session): session closed for user root
Dec 09 16:14:57 compute-0 ceph-mon[75222]: pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:57 compute-0 sudo[161524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlbuxwbtjqmicnvghulzkmtkyzsateyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765296896.3038125-324-52227687473460/AnsiballZ_dnf.py'
Dec 09 16:14:57 compute-0 sudo[161524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:14:57 compute-0 python3.9[161526]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
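The dnf task above installs the virtualization stack in one transaction; a shell-out sketch with the same package set (the trailing spaces on the first four names are carried over verbatim from the logged task data and are stripped here before use):

    import subprocess

    PACKAGES = [p.strip() for p in [
        "libvirt ", "libvirt-admin ", "libvirt-client ", "libvirt-daemon ",
        "qemu-kvm", "qemu-img", "libguestfs", "libseccomp", "swtpm",
        "swtpm-tools", "edk2-ovmf", "ceph-common", "cyrus-sasl-scram",
    ]]

    # state=present with install_weak_deps=True matches dnf's defaults;
    # -y answers the transaction prompt non-interactively.
    subprocess.run(["dnf", "-y", "install", *PACKAGES], check=True)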
Dec 09 16:14:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:14:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:14:59 compute-0 ceph-mon[75222]: pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:01 compute-0 ceph-mon[75222]: pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:02 compute-0 ceph-mon[75222]: pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:15:05 compute-0 ceph-mon[75222]: pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:07 compute-0 ceph-mon[75222]: pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:15:09 compute-0 ceph-mon[75222]: pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:11 compute-0 ceph-mon[75222]: pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:12 compute-0 podman[161713]: 2025-12-09 16:15:12.672704357 +0000 UTC m=+0.114604395 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
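The health_status event above comes from podman's healthcheck timer running the configured /openstack/healthcheck test; the same check can be driven on demand, sketched here with the container name taken from the log line:

    import json
    import subprocess

    NAME = "ovn_controller"

    # Run the container's configured healthcheck once; exit code 0 means
    # healthy (check=True raises on an unhealthy result).
    subprocess.run(["podman", "healthcheck", "run", NAME], check=True)

    # Read back the recorded state that podman reports in the event above.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", NAME],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(out)
    print(health["Status"], health["FailingStreak"])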
Dec 09 16:15:12 compute-0 sudo[161738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:15:12 compute-0 sudo[161738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:15:12 compute-0 sudo[161738]: pam_unix(sudo:session): session closed for user root
Dec 09 16:15:12 compute-0 sudo[161763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:15:12 compute-0 sudo[161763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:15:13 compute-0 sudo[161763]: pam_unix(sudo:session): session closed for user root
Dec 09 16:15:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 09 16:15:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 09 16:15:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:15:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:15:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:15:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:15:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:15:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:15:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:15:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:15:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:15:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:15:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:15:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
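Each handle_command/audit pair above is one mon_command dispatched by the mgr; the read-only ones can be replayed from the ceph CLI, sketched here (a reachable cluster and an admin keyring are assumptions):

    import json
    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(
            ["ceph", *args], check=True, capture_output=True, text=True
        ).stdout

    # Two of the commands audited above, both side-effect free.
    minimal_conf = ceph("config", "generate-minimal-conf")
    tree = json.loads(ceph("osd", "tree", "destroyed", "--format", "json"))
    print(minimal_conf.strip())
    print(len(tree.get("nodes", [])), "nodes in the destroyed-OSD tree")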
Dec 09 16:15:13 compute-0 sudo[161817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:15:13 compute-0 sudo[161817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:15:13 compute-0 sudo[161817]: pam_unix(sudo:session): session closed for user root
Dec 09 16:15:13 compute-0 sudo[161842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:15:13 compute-0 sudo[161842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
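That sudo line is cephadm invoking ceph-volume inside a one-shot container; a sketch that reassembles the same `lvm batch` argument vector for readability (FSID, image digest, and LV paths copied from the log; the real call additionally passes --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group, --timeout 895, and the config/keyring as --config-json on stdin):

    # Argument vector of the logged ceph-volume call, rebuilt for clarity.
    FSID = "67f67f44-54fc-54ea-8df0-10931b6ecdaf"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]

    cmd = [
        "cephadm", "--image", IMAGE,
        "ceph-volume", "--fsid", FSID, "--",
        "lvm", "batch", "--no-auto", *LVS,
        "--objectstore", "bluestore", "--yes", "--no-systemd",
    ]
    print(" ".join(cmd))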
Dec 09 16:15:13 compute-0 ceph-mon[75222]: pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 09 16:15:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:15:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:15:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:15:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:15:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:15:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:15:13 compute-0 podman[161879]: 2025-12-09 16:15:13.805555688 +0000 UTC m=+0.053474883 container create a101a868e661161b816d6c551695d83702f51f0ad340486b312d0b51b1c891d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lovelace, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:15:13 compute-0 systemd[1]: Started libpod-conmon-a101a868e661161b816d6c551695d83702f51f0ad340486b312d0b51b1c891d7.scope.
Dec 09 16:15:13 compute-0 podman[161879]: 2025-12-09 16:15:13.779561364 +0000 UTC m=+0.027480659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:15:13 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:15:13 compute-0 podman[161879]: 2025-12-09 16:15:13.890563833 +0000 UTC m=+0.138483078 container init a101a868e661161b816d6c551695d83702f51f0ad340486b312d0b51b1c891d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:15:13 compute-0 podman[161879]: 2025-12-09 16:15:13.898847303 +0000 UTC m=+0.146766508 container start a101a868e661161b816d6c551695d83702f51f0ad340486b312d0b51b1c891d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:15:13 compute-0 podman[161879]: 2025-12-09 16:15:13.902047235 +0000 UTC m=+0.149966450 container attach a101a868e661161b816d6c551695d83702f51f0ad340486b312d0b51b1c891d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lovelace, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 09 16:15:13 compute-0 blissful_lovelace[161895]: 167 167
Dec 09 16:15:13 compute-0 podman[161879]: 2025-12-09 16:15:13.904347872 +0000 UTC m=+0.152267067 container died a101a868e661161b816d6c551695d83702f51f0ad340486b312d0b51b1c891d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lovelace, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 09 16:15:13 compute-0 systemd[1]: libpod-a101a868e661161b816d6c551695d83702f51f0ad340486b312d0b51b1c891d7.scope: Deactivated successfully.
Dec 09 16:15:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-db96f61fc6183b4b113227f3dc492fd65880c17b800ec516c2cb00224508090a-merged.mount: Deactivated successfully.
Dec 09 16:15:13 compute-0 podman[161879]: 2025-12-09 16:15:13.955134255 +0000 UTC m=+0.203053460 container remove a101a868e661161b816d6c551695d83702f51f0ad340486b312d0b51b1c891d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:15:13 compute-0 systemd[1]: libpod-conmon-a101a868e661161b816d6c551695d83702f51f0ad340486b312d0b51b1c891d7.scope: Deactivated successfully.
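The create→init→start→attach→died→remove run above is cephadm's disposable-container pattern, equivalent to a `podman run --rm`; a sketch of a plausible probe (the "167 167" output reads like a uid/gid pair for the ceph user, so a stat of /var/lib/ceph is assumed here, not confirmed by the log):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # One-shot container: stdout is captured and the container is removed on
    # exit, matching the short lifecycle logged above.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())  # expected: "167 167"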
Dec 09 16:15:14 compute-0 podman[161921]: 2025-12-09 16:15:14.124314703 +0000 UTC m=+0.045208913 container create 2387defb806214938ef1acf92fd37214e8470a6e235d733e930d3975a48c1a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_golick, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 16:15:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:14 compute-0 systemd[1]: Started libpod-conmon-2387defb806214938ef1acf92fd37214e8470a6e235d733e930d3975a48c1a58.scope.
Dec 09 16:15:14 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417ab904cd4ed5205e4da1c750a743939cbc9d71c581832b18432e7ebff74d9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417ab904cd4ed5205e4da1c750a743939cbc9d71c581832b18432e7ebff74d9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417ab904cd4ed5205e4da1c750a743939cbc9d71c581832b18432e7ebff74d9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417ab904cd4ed5205e4da1c750a743939cbc9d71c581832b18432e7ebff74d9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417ab904cd4ed5205e4da1c750a743939cbc9d71c581832b18432e7ebff74d9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:14 compute-0 podman[161921]: 2025-12-09 16:15:14.105969481 +0000 UTC m=+0.026863731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:15:14 compute-0 podman[161921]: 2025-12-09 16:15:14.206168217 +0000 UTC m=+0.127062507 container init 2387defb806214938ef1acf92fd37214e8470a6e235d733e930d3975a48c1a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:15:14 compute-0 podman[161921]: 2025-12-09 16:15:14.217854656 +0000 UTC m=+0.138748886 container start 2387defb806214938ef1acf92fd37214e8470a6e235d733e930d3975a48c1a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 09 16:15:14 compute-0 podman[161921]: 2025-12-09 16:15:14.220775911 +0000 UTC m=+0.141670131 container attach 2387defb806214938ef1acf92fd37214e8470a6e235d733e930d3975a48c1a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:15:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:15:14 compute-0 distracted_golick[161936]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:15:14 compute-0 distracted_golick[161936]: --> All data devices are unavailable
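ceph-volume counted the three LVs but reported them all unavailable, plausibly because they already carry ceph.* LV tags from an earlier prepare (the `lvm list` output further below shows OSDs 0-2 on exactly these LVs); a sketch of the same availability test via LVM's JSON report:

    import json
    import subprocess

    # An LV that already carries ceph tags is not a fresh candidate for
    # `ceph-volume lvm batch`.
    report = json.loads(subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_path,lv_tags"],
        check=True, capture_output=True, text=True,
    ).stdout)
    for lv in report["report"][0]["lv"]:
        taken = "ceph.osd_id=" in lv["lv_tags"]
        print(lv["lv_path"], "in use by ceph" if taken else "available")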
Dec 09 16:15:14 compute-0 systemd[1]: libpod-2387defb806214938ef1acf92fd37214e8470a6e235d733e930d3975a48c1a58.scope: Deactivated successfully.
Dec 09 16:15:14 compute-0 conmon[161936]: conmon 2387defb806214938ef1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2387defb806214938ef1acf92fd37214e8470a6e235d733e930d3975a48c1a58.scope/container/memory.events
Dec 09 16:15:14 compute-0 podman[161921]: 2025-12-09 16:15:14.753323218 +0000 UTC m=+0.674217438 container died 2387defb806214938ef1acf92fd37214e8470a6e235d733e930d3975a48c1a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 16:15:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-417ab904cd4ed5205e4da1c750a743939cbc9d71c581832b18432e7ebff74d9a-merged.mount: Deactivated successfully.
Dec 09 16:15:14 compute-0 podman[161921]: 2025-12-09 16:15:14.791026252 +0000 UTC m=+0.711920482 container remove 2387defb806214938ef1acf92fd37214e8470a6e235d733e930d3975a48c1a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:15:14 compute-0 systemd[1]: libpod-conmon-2387defb806214938ef1acf92fd37214e8470a6e235d733e930d3975a48c1a58.scope: Deactivated successfully.
Dec 09 16:15:14 compute-0 sudo[161842]: pam_unix(sudo:session): session closed for user root
Dec 09 16:15:14 compute-0 sudo[161969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:15:14 compute-0 sudo[161969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:15:14 compute-0 sudo[161969]: pam_unix(sudo:session): session closed for user root
Dec 09 16:15:14 compute-0 sudo[161994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:15:14 compute-0 sudo[161994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:15:15 compute-0 podman[162031]: 2025-12-09 16:15:15.300007636 +0000 UTC m=+0.049194458 container create 3742dc99522aece2b31195d00f67c889d8b434366e1b038757ee9dda590b98f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:15:15 compute-0 systemd[1]: Started libpod-conmon-3742dc99522aece2b31195d00f67c889d8b434366e1b038757ee9dda590b98f0.scope.
Dec 09 16:15:15 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:15:15 compute-0 podman[162031]: 2025-12-09 16:15:15.282139637 +0000 UTC m=+0.031326459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:15:15 compute-0 podman[162031]: 2025-12-09 16:15:15.392140258 +0000 UTC m=+0.141327080 container init 3742dc99522aece2b31195d00f67c889d8b434366e1b038757ee9dda590b98f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 09 16:15:15 compute-0 podman[162031]: 2025-12-09 16:15:15.402508599 +0000 UTC m=+0.151695451 container start 3742dc99522aece2b31195d00f67c889d8b434366e1b038757ee9dda590b98f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_davinci, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 09 16:15:15 compute-0 podman[162031]: 2025-12-09 16:15:15.406297759 +0000 UTC m=+0.155484611 container attach 3742dc99522aece2b31195d00f67c889d8b434366e1b038757ee9dda590b98f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_davinci, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 09 16:15:15 compute-0 intelligent_davinci[162046]: 167 167
Dec 09 16:15:15 compute-0 systemd[1]: libpod-3742dc99522aece2b31195d00f67c889d8b434366e1b038757ee9dda590b98f0.scope: Deactivated successfully.
Dec 09 16:15:15 compute-0 podman[162031]: 2025-12-09 16:15:15.411572732 +0000 UTC m=+0.160759614 container died 3742dc99522aece2b31195d00f67c889d8b434366e1b038757ee9dda590b98f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_davinci, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:15:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-aecbf1d5e0334591ec814918102233c4e4167f94dd61e494eac0448b796acc2a-merged.mount: Deactivated successfully.
Dec 09 16:15:15 compute-0 podman[162031]: 2025-12-09 16:15:15.458191334 +0000 UTC m=+0.207378146 container remove 3742dc99522aece2b31195d00f67c889d8b434366e1b038757ee9dda590b98f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:15:15 compute-0 systemd[1]: libpod-conmon-3742dc99522aece2b31195d00f67c889d8b434366e1b038757ee9dda590b98f0.scope: Deactivated successfully.
Dec 09 16:15:15 compute-0 ceph-mon[75222]: pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:15 compute-0 podman[162066]: 2025-12-09 16:15:15.598758741 +0000 UTC m=+0.095898972 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:15:15 compute-0 podman[162087]: 2025-12-09 16:15:15.635505487 +0000 UTC m=+0.049152886 container create aee09f4dc80dfd401c4a2000114df2e085f22256d42c9f0e55e8856c397b2bed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kalam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 09 16:15:15 compute-0 systemd[1]: Started libpod-conmon-aee09f4dc80dfd401c4a2000114df2e085f22256d42c9f0e55e8856c397b2bed.scope.
Dec 09 16:15:15 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c6e1da20f1772b2ad3e3c54f90015808e367a48c506ed181685c2f0e19f6ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c6e1da20f1772b2ad3e3c54f90015808e367a48c506ed181685c2f0e19f6ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c6e1da20f1772b2ad3e3c54f90015808e367a48c506ed181685c2f0e19f6ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c6e1da20f1772b2ad3e3c54f90015808e367a48c506ed181685c2f0e19f6ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:15 compute-0 podman[162087]: 2025-12-09 16:15:15.613281653 +0000 UTC m=+0.026929082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:15:15 compute-0 podman[162087]: 2025-12-09 16:15:15.720004848 +0000 UTC m=+0.133652277 container init aee09f4dc80dfd401c4a2000114df2e085f22256d42c9f0e55e8856c397b2bed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kalam, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 09 16:15:15 compute-0 podman[162087]: 2025-12-09 16:15:15.727398523 +0000 UTC m=+0.141045932 container start aee09f4dc80dfd401c4a2000114df2e085f22256d42c9f0e55e8856c397b2bed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:15:15 compute-0 podman[162087]: 2025-12-09 16:15:15.732087559 +0000 UTC m=+0.145734958 container attach aee09f4dc80dfd401c4a2000114df2e085f22256d42c9f0e55e8856c397b2bed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:15:16 compute-0 magical_kalam[162103]: {
Dec 09 16:15:16 compute-0 magical_kalam[162103]:     "0": [
Dec 09 16:15:16 compute-0 magical_kalam[162103]:         {
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "devices": [
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "/dev/loop3"
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             ],
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_name": "ceph_lv0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_size": "21470642176",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "name": "ceph_lv0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "tags": {
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.cluster_name": "ceph",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.crush_device_class": "",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.encrypted": "0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.objectstore": "bluestore",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.osd_id": "0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.type": "block",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.vdo": "0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.with_tpm": "0"
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             },
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "type": "block",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "vg_name": "ceph_vg0"
Dec 09 16:15:16 compute-0 magical_kalam[162103]:         }
Dec 09 16:15:16 compute-0 magical_kalam[162103]:     ],
Dec 09 16:15:16 compute-0 magical_kalam[162103]:     "1": [
Dec 09 16:15:16 compute-0 magical_kalam[162103]:         {
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "devices": [
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "/dev/loop4"
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             ],
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_name": "ceph_lv1",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_size": "21470642176",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "name": "ceph_lv1",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "tags": {
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.cluster_name": "ceph",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.crush_device_class": "",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.encrypted": "0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.objectstore": "bluestore",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.osd_id": "1",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.type": "block",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.vdo": "0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.with_tpm": "0"
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             },
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "type": "block",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "vg_name": "ceph_vg1"
Dec 09 16:15:16 compute-0 magical_kalam[162103]:         }
Dec 09 16:15:16 compute-0 magical_kalam[162103]:     ],
Dec 09 16:15:16 compute-0 magical_kalam[162103]:     "2": [
Dec 09 16:15:16 compute-0 magical_kalam[162103]:         {
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "devices": [
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "/dev/loop5"
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             ],
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_name": "ceph_lv2",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_size": "21470642176",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "name": "ceph_lv2",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "tags": {
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.cluster_name": "ceph",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.crush_device_class": "",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.encrypted": "0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.objectstore": "bluestore",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.osd_id": "2",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.type": "block",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.vdo": "0",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:                 "ceph.with_tpm": "0"
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             },
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "type": "block",
Dec 09 16:15:16 compute-0 magical_kalam[162103]:             "vg_name": "ceph_vg2"
Dec 09 16:15:16 compute-0 magical_kalam[162103]:         }
Dec 09 16:15:16 compute-0 magical_kalam[162103]:     ]
Dec 09 16:15:16 compute-0 magical_kalam[162103]: }
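The JSON block above keys each ceph-volume entry by OSD id; a sketch reducing it to a per-OSD device map (reading from lvm_list.json is a hypothetical stand-in for the stdout capture cephadm performs):

    import json

    with open("lvm_list.json") as fh:  # hypothetical capture of the output above
        osds = json.load(fh)

    for osd_id, entries in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for e in entries:
            print(f"osd.{osd_id}: {e['lv_path']} "
                  f"(backing {','.join(e['devices'])}, "
                  f"osd_fsid {e['tags']['ceph.osd_fsid']})")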
Dec 09 16:15:16 compute-0 systemd[1]: libpod-aee09f4dc80dfd401c4a2000114df2e085f22256d42c9f0e55e8856c397b2bed.scope: Deactivated successfully.
Dec 09 16:15:16 compute-0 podman[162112]: 2025-12-09 16:15:16.129095945 +0000 UTC m=+0.033280727 container died aee09f4dc80dfd401c4a2000114df2e085f22256d42c9f0e55e8856c397b2bed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kalam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:15:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-72c6e1da20f1772b2ad3e3c54f90015808e367a48c506ed181685c2f0e19f6ca-merged.mount: Deactivated successfully.
Dec 09 16:15:16 compute-0 podman[162112]: 2025-12-09 16:15:16.170865596 +0000 UTC m=+0.075050278 container remove aee09f4dc80dfd401c4a2000114df2e085f22256d42c9f0e55e8856c397b2bed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:15:16 compute-0 systemd[1]: libpod-conmon-aee09f4dc80dfd401c4a2000114df2e085f22256d42c9f0e55e8856c397b2bed.scope: Deactivated successfully.
Dec 09 16:15:16 compute-0 sudo[161994]: pam_unix(sudo:session): session closed for user root
Dec 09 16:15:16 compute-0 sudo[162127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:15:16 compute-0 sudo[162127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:15:16 compute-0 sudo[162127]: pam_unix(sudo:session): session closed for user root
Dec 09 16:15:16 compute-0 sudo[162152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:15:16 compute-0 sudo[162152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:15:16 compute-0 podman[162189]: 2025-12-09 16:15:16.695513285 +0000 UTC m=+0.048892380 container create 49f53889642e969a651ba58b817b890ceea15e765ed37cad00f310d0f070f784 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_engelbart, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:15:16 compute-0 systemd[1]: Started libpod-conmon-49f53889642e969a651ba58b817b890ceea15e765ed37cad00f310d0f070f784.scope.
Dec 09 16:15:16 compute-0 podman[162189]: 2025-12-09 16:15:16.673448045 +0000 UTC m=+0.026827230 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:15:16 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:15:16 compute-0 podman[162189]: 2025-12-09 16:15:16.803568639 +0000 UTC m=+0.156947774 container init 49f53889642e969a651ba58b817b890ceea15e765ed37cad00f310d0f070f784 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_engelbart, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:15:16 compute-0 podman[162189]: 2025-12-09 16:15:16.813981241 +0000 UTC m=+0.167360376 container start 49f53889642e969a651ba58b817b890ceea15e765ed37cad00f310d0f070f784 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 09 16:15:16 compute-0 podman[162189]: 2025-12-09 16:15:16.818000618 +0000 UTC m=+0.171379733 container attach 49f53889642e969a651ba58b817b890ceea15e765ed37cad00f310d0f070f784 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 09 16:15:16 compute-0 elegant_engelbart[162205]: 167 167
Dec 09 16:15:16 compute-0 systemd[1]: libpod-49f53889642e969a651ba58b817b890ceea15e765ed37cad00f310d0f070f784.scope: Deactivated successfully.
Dec 09 16:15:16 compute-0 podman[162189]: 2025-12-09 16:15:16.820990384 +0000 UTC m=+0.174369569 container died 49f53889642e969a651ba58b817b890ceea15e765ed37cad00f310d0f070f784 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-629d9e1215e0cd307d76b61abc0998b1211de41ca1db1ae215b93141edc3ab49-merged.mount: Deactivated successfully.
Dec 09 16:15:16 compute-0 podman[162189]: 2025-12-09 16:15:16.87567422 +0000 UTC m=+0.229053325 container remove 49f53889642e969a651ba58b817b890ceea15e765ed37cad00f310d0f070f784 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_engelbart, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 09 16:15:16 compute-0 systemd[1]: libpod-conmon-49f53889642e969a651ba58b817b890ceea15e765ed37cad00f310d0f070f784.scope: Deactivated successfully.
Dec 09 16:15:17 compute-0 podman[162231]: 2025-12-09 16:15:17.070381618 +0000 UTC m=+0.035538202 container create c6c90bf3f6ba46e74c6b74f2faa6e0b2ee3222633905e3d1bdad0e045fa09045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lamarr, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 09 16:15:17 compute-0 systemd[1]: Started libpod-conmon-c6c90bf3f6ba46e74c6b74f2faa6e0b2ee3222633905e3d1bdad0e045fa09045.scope.
Dec 09 16:15:17 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2fccb9d69ebb043b63f5b0cc352ae9943837371a9391f77aaf2c1e35fcc65e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2fccb9d69ebb043b63f5b0cc352ae9943837371a9391f77aaf2c1e35fcc65e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2fccb9d69ebb043b63f5b0cc352ae9943837371a9391f77aaf2c1e35fcc65e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2fccb9d69ebb043b63f5b0cc352ae9943837371a9391f77aaf2c1e35fcc65e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:15:17 compute-0 podman[162231]: 2025-12-09 16:15:17.146178207 +0000 UTC m=+0.111334821 container init c6c90bf3f6ba46e74c6b74f2faa6e0b2ee3222633905e3d1bdad0e045fa09045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lamarr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 16:15:17 compute-0 podman[162231]: 2025-12-09 16:15:17.056004951 +0000 UTC m=+0.021161555 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:15:17 compute-0 podman[162231]: 2025-12-09 16:15:17.152685726 +0000 UTC m=+0.117842310 container start c6c90bf3f6ba46e74c6b74f2faa6e0b2ee3222633905e3d1bdad0e045fa09045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:15:17 compute-0 podman[162231]: 2025-12-09 16:15:17.163118678 +0000 UTC m=+0.128275272 container attach c6c90bf3f6ba46e74c6b74f2faa6e0b2ee3222633905e3d1bdad0e045fa09045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 09 16:15:17 compute-0 ceph-mon[75222]: pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:17 compute-0 lvm[162327]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:15:17 compute-0 lvm[162327]: VG ceph_vg1 finished
Dec 09 16:15:17 compute-0 lvm[162326]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:15:17 compute-0 lvm[162326]: VG ceph_vg0 finished
Dec 09 16:15:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:15:17.830 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:15:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:15:17.831 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:15:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:15:17.831 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:15:17 compute-0 lvm[162329]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:15:17 compute-0 lvm[162329]: VG ceph_vg2 finished
Dec 09 16:15:17 compute-0 flamboyant_lamarr[162245]: {}
Dec 09 16:15:17 compute-0 systemd[1]: libpod-c6c90bf3f6ba46e74c6b74f2faa6e0b2ee3222633905e3d1bdad0e045fa09045.scope: Deactivated successfully.
Dec 09 16:15:17 compute-0 systemd[1]: libpod-c6c90bf3f6ba46e74c6b74f2faa6e0b2ee3222633905e3d1bdad0e045fa09045.scope: Consumed 1.254s CPU time.
Dec 09 16:15:17 compute-0 podman[162231]: 2025-12-09 16:15:17.963045531 +0000 UTC m=+0.928202115 container died c6c90bf3f6ba46e74c6b74f2faa6e0b2ee3222633905e3d1bdad0e045fa09045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:15:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f2fccb9d69ebb043b63f5b0cc352ae9943837371a9391f77aaf2c1e35fcc65e-merged.mount: Deactivated successfully.
Dec 09 16:15:18 compute-0 podman[162231]: 2025-12-09 16:15:18.013766512 +0000 UTC m=+0.978923096 container remove c6c90bf3f6ba46e74c6b74f2faa6e0b2ee3222633905e3d1bdad0e045fa09045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 09 16:15:18 compute-0 systemd[1]: libpod-conmon-c6c90bf3f6ba46e74c6b74f2faa6e0b2ee3222633905e3d1bdad0e045fa09045.scope: Deactivated successfully.
Dec 09 16:15:18 compute-0 sudo[162152]: pam_unix(sudo:session): session closed for user root
Dec 09 16:15:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:15:18 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:15:18 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:15:18 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:15:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:18 compute-0 sudo[162344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:15:18 compute-0 sudo[162344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:15:18 compute-0 sudo[162344]: pam_unix(sudo:session): session closed for user root
Dec 09 16:15:19 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:15:19 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:15:19 compute-0 ceph-mon[75222]: pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:15:19 compute-0 sshd-session[162370]: Invalid user odoo from 146.190.31.45 port 57750
Dec 09 16:15:19 compute-0 sshd-session[162370]: Connection closed by invalid user odoo 146.190.31.45 port 57750 [preauth]
Dec 09 16:15:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:15:21 compute-0 ceph-mon[75222]: pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:15:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:15:23 compute-0 ceph-mon[75222]: pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:15:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:15:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:15:25 compute-0 ceph-mon[75222]: pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:15:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:15:25
Dec 09 16:15:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:15:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:15:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'images', 'default.rgw.log', 'vms']
Dec 09 16:15:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:15:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:15:27 compute-0 ceph-mon[75222]: pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:15:27 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Dec 09 16:15:27 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 16:15:27 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 16:15:27 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 16:15:27 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 16:15:27 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 16:15:27 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 16:15:27 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 16:15:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:15:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:15:29 compute-0 ceph-mon[75222]: pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:15:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:15:31 compute-0 ceph-mon[75222]: pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:15:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:33 compute-0 ceph-mon[75222]: pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:15:35 compute-0 ceph-mon[75222]: pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:15:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:15:37 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Dec 09 16:15:37 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 16:15:37 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 16:15:37 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 16:15:37 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 16:15:37 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 16:15:37 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 16:15:37 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 16:15:37 compute-0 ceph-mon[75222]: pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:15:39 compute-0 ceph-mon[75222]: pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:41 compute-0 ceph-mon[75222]: pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:43 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 09 16:15:43 compute-0 ceph-mon[75222]: pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:43 compute-0 podman[162388]: 2025-12-09 16:15:43.949922597 +0000 UTC m=+0.379642784 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 09 16:15:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:15:44 compute-0 ceph-mon[75222]: pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:46 compute-0 podman[162414]: 2025-12-09 16:15:46.594468965 +0000 UTC m=+0.043521373 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 16:15:47 compute-0 ceph-mon[75222]: pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:49 compute-0 ceph-mon[75222]: pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:15:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:51 compute-0 ceph-mon[75222]: pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:53 compute-0 ceph-mon[75222]: pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:15:55 compute-0 ceph-mon[75222]: pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:15:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:15:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:15:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:15:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:15:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:15:57 compute-0 ceph-mon[75222]: pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:59 compute-0 ceph-mon[75222]: pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:15:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:16:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:01 compute-0 ceph-mon[75222]: pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:03 compute-0 ceph-mon[75222]: pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:16:04 compute-0 sshd-session[172326]: Invalid user odoo from 146.190.31.45 port 47926
Dec 09 16:16:04 compute-0 sshd-session[172326]: Connection closed by invalid user odoo 146.190.31.45 port 47926 [preauth]
Dec 09 16:16:05 compute-0 ceph-mon[75222]: pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:07 compute-0 ceph-mon[75222]: pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:08 compute-0 ceph-mon[75222]: pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:16:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:11 compute-0 ceph-mon[75222]: pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:13 compute-0 ceph-mon[75222]: pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:16:14 compute-0 podman[179223]: 2025-12-09 16:16:14.649706792 +0000 UTC m=+0.099866466 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 09 16:16:15 compute-0 ceph-mon[75222]: pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:17 compute-0 ceph-mon[75222]: pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:17 compute-0 podman[179264]: 2025-12-09 16:16:17.605432711 +0000 UTC m=+0.055378565 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 09 16:16:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:16:17.832 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:16:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:16:17.832 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:16:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:16:17.832 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:16:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:18 compute-0 sudo[179284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:16:18 compute-0 sudo[179284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:16:18 compute-0 sudo[179284]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:18 compute-0 sudo[179309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 09 16:16:18 compute-0 sudo[179309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:16:19 compute-0 podman[179379]: 2025-12-09 16:16:19.018311906 +0000 UTC m=+0.421139024 container exec 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:16:19 compute-0 ceph-mon[75222]: pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:19 compute-0 podman[179379]: 2025-12-09 16:16:19.126592031 +0000 UTC m=+0.529419119 container exec_died 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 09 16:16:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:16:19 compute-0 sudo[179309]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:16:19 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:16:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:16:19 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:16:19 compute-0 sudo[179568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:16:19 compute-0 sudo[179568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:16:19 compute-0 sudo[179568]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:19 compute-0 sudo[179593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:16:19 compute-0 sudo[179593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:16:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:20 compute-0 sudo[179593]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:16:20 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:16:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:16:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:16:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:16:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:16:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:16:20 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:16:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:16:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:16:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:16:20 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:16:20 compute-0 sudo[179650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:16:20 compute-0 sudo[179650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:16:20 compute-0 sudo[179650]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:20 compute-0 sudo[179675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:16:20 compute-0 sudo[179675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:16:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:16:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:16:20 compute-0 ceph-mon[75222]: pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:16:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:16:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:16:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:16:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:16:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:16:20 compute-0 podman[179712]: 2025-12-09 16:16:20.867429164 +0000 UTC m=+0.022048216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:16:20 compute-0 podman[179712]: 2025-12-09 16:16:20.98991902 +0000 UTC m=+0.144538042 container create 0c7b5e0044fa304706dafd597932384c4aea6b26c5b894e4ef43a3de5dbed019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 09 16:16:21 compute-0 systemd[1]: Started libpod-conmon-0c7b5e0044fa304706dafd597932384c4aea6b26c5b894e4ef43a3de5dbed019.scope.
Dec 09 16:16:21 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:16:21 compute-0 podman[179712]: 2025-12-09 16:16:21.153890968 +0000 UTC m=+0.308510010 container init 0c7b5e0044fa304706dafd597932384c4aea6b26c5b894e4ef43a3de5dbed019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_turing, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 09 16:16:21 compute-0 podman[179712]: 2025-12-09 16:16:21.16507038 +0000 UTC m=+0.319689412 container start 0c7b5e0044fa304706dafd597932384c4aea6b26c5b894e4ef43a3de5dbed019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_turing, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:16:21 compute-0 happy_turing[179728]: 167 167
Dec 09 16:16:21 compute-0 systemd[1]: libpod-0c7b5e0044fa304706dafd597932384c4aea6b26c5b894e4ef43a3de5dbed019.scope: Deactivated successfully.
Dec 09 16:16:21 compute-0 podman[179712]: 2025-12-09 16:16:21.299384938 +0000 UTC m=+0.454003980 container attach 0c7b5e0044fa304706dafd597932384c4aea6b26c5b894e4ef43a3de5dbed019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:16:21 compute-0 podman[179712]: 2025-12-09 16:16:21.300725077 +0000 UTC m=+0.455344099 container died 0c7b5e0044fa304706dafd597932384c4aea6b26c5b894e4ef43a3de5dbed019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_turing, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:16:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a69fce2721ed75b765b487bbaa64aa6e6cf346ce5abc40c993ee899b8e019e9-merged.mount: Deactivated successfully.
Dec 09 16:16:21 compute-0 podman[179712]: 2025-12-09 16:16:21.572238598 +0000 UTC m=+0.726857620 container remove 0c7b5e0044fa304706dafd597932384c4aea6b26c5b894e4ef43a3de5dbed019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 09 16:16:21 compute-0 systemd[1]: libpod-conmon-0c7b5e0044fa304706dafd597932384c4aea6b26c5b894e4ef43a3de5dbed019.scope: Deactivated successfully.
Dec 09 16:16:21 compute-0 podman[179754]: 2025-12-09 16:16:21.776876193 +0000 UTC m=+0.098637179 container create 49e9b7fb8f80455393358f88443afdc5b1d2e12d39163414a74912d388d542a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_curran, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:16:21 compute-0 podman[179754]: 2025-12-09 16:16:21.699503546 +0000 UTC m=+0.021264532 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:16:21 compute-0 systemd[1]: Started libpod-conmon-49e9b7fb8f80455393358f88443afdc5b1d2e12d39163414a74912d388d542a8.scope.
Dec 09 16:16:21 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6a6c02b1687a7f5b4b6dddc04e8941b083268ac1fad2e7c1b1ee3c4e246486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6a6c02b1687a7f5b4b6dddc04e8941b083268ac1fad2e7c1b1ee3c4e246486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6a6c02b1687a7f5b4b6dddc04e8941b083268ac1fad2e7c1b1ee3c4e246486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6a6c02b1687a7f5b4b6dddc04e8941b083268ac1fad2e7c1b1ee3c4e246486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6a6c02b1687a7f5b4b6dddc04e8941b083268ac1fad2e7c1b1ee3c4e246486/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:21 compute-0 podman[179754]: 2025-12-09 16:16:21.938708847 +0000 UTC m=+0.260469843 container init 49e9b7fb8f80455393358f88443afdc5b1d2e12d39163414a74912d388d542a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_curran, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 09 16:16:21 compute-0 podman[179754]: 2025-12-09 16:16:21.945000564 +0000 UTC m=+0.266761570 container start 49e9b7fb8f80455393358f88443afdc5b1d2e12d39163414a74912d388d542a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_curran, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 09 16:16:21 compute-0 podman[179754]: 2025-12-09 16:16:21.965032089 +0000 UTC m=+0.286793095 container attach 49e9b7fb8f80455393358f88443afdc5b1d2e12d39163414a74912d388d542a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_curran, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 09 16:16:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:22 compute-0 laughing_curran[179771]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:16:22 compute-0 laughing_curran[179771]: --> All data devices are unavailable
Dec 09 16:16:22 compute-0 systemd[1]: libpod-49e9b7fb8f80455393358f88443afdc5b1d2e12d39163414a74912d388d542a8.scope: Deactivated successfully.
Dec 09 16:16:22 compute-0 podman[179754]: 2025-12-09 16:16:22.421058797 +0000 UTC m=+0.742819773 container died 49e9b7fb8f80455393358f88443afdc5b1d2e12d39163414a74912d388d542a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_curran, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:16:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e6a6c02b1687a7f5b4b6dddc04e8941b083268ac1fad2e7c1b1ee3c4e246486-merged.mount: Deactivated successfully.
Dec 09 16:16:22 compute-0 podman[179754]: 2025-12-09 16:16:22.78661446 +0000 UTC m=+1.108375476 container remove 49e9b7fb8f80455393358f88443afdc5b1d2e12d39163414a74912d388d542a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_curran, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:16:22 compute-0 sudo[179675]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:22 compute-0 systemd[1]: libpod-conmon-49e9b7fb8f80455393358f88443afdc5b1d2e12d39163414a74912d388d542a8.scope: Deactivated successfully.
Dec 09 16:16:22 compute-0 sudo[179803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:16:22 compute-0 sudo[179803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:16:22 compute-0 sudo[179803]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:22 compute-0 sudo[179828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:16:22 compute-0 sudo[179828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:16:23 compute-0 podman[179863]: 2025-12-09 16:16:23.232303952 +0000 UTC m=+0.019872741 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:16:23 compute-0 podman[179863]: 2025-12-09 16:16:23.346517242 +0000 UTC m=+0.134086001 container create b32ba43d8106898f8b7d8536d85066c882cd22db01b73bd03a8d5f1b8923c439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 16:16:23 compute-0 ceph-mon[75222]: pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:23 compute-0 systemd[1]: Started libpod-conmon-b32ba43d8106898f8b7d8536d85066c882cd22db01b73bd03a8d5f1b8923c439.scope.
Dec 09 16:16:23 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:16:23 compute-0 podman[179863]: 2025-12-09 16:16:23.609096298 +0000 UTC m=+0.396665077 container init b32ba43d8106898f8b7d8536d85066c882cd22db01b73bd03a8d5f1b8923c439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:16:23 compute-0 podman[179863]: 2025-12-09 16:16:23.617144447 +0000 UTC m=+0.404713236 container start b32ba43d8106898f8b7d8536d85066c882cd22db01b73bd03a8d5f1b8923c439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:16:23 compute-0 systemd[1]: libpod-b32ba43d8106898f8b7d8536d85066c882cd22db01b73bd03a8d5f1b8923c439.scope: Deactivated successfully.
Dec 09 16:16:23 compute-0 frosty_black[179879]: 167 167
Dec 09 16:16:23 compute-0 conmon[179879]: conmon b32ba43d8106898f8b7d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b32ba43d8106898f8b7d8536d85066c882cd22db01b73bd03a8d5f1b8923c439.scope/container/memory.events
Dec 09 16:16:23 compute-0 podman[179863]: 2025-12-09 16:16:23.671295495 +0000 UTC m=+0.458864294 container attach b32ba43d8106898f8b7d8536d85066c882cd22db01b73bd03a8d5f1b8923c439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_black, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 16:16:23 compute-0 podman[179863]: 2025-12-09 16:16:23.671771749 +0000 UTC m=+0.459340548 container died b32ba43d8106898f8b7d8536d85066c882cd22db01b73bd03a8d5f1b8923c439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:16:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1af741adfedcf9e6566dd6a175eeb8a954b88271cddddd0d28e5bdf8f69fc7c8-merged.mount: Deactivated successfully.
Dec 09 16:16:23 compute-0 podman[179863]: 2025-12-09 16:16:23.925698367 +0000 UTC m=+0.713267116 container remove b32ba43d8106898f8b7d8536d85066c882cd22db01b73bd03a8d5f1b8923c439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_black, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:16:23 compute-0 systemd[1]: libpod-conmon-b32ba43d8106898f8b7d8536d85066c882cd22db01b73bd03a8d5f1b8923c439.scope: Deactivated successfully.
Dec 09 16:16:24 compute-0 podman[179904]: 2025-12-09 16:16:24.093391126 +0000 UTC m=+0.044807091 container create ff9637200ef08e07e7c07697fbbb27e7248c344f5bcf501c374bc7775da3318e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hertz, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:16:24 compute-0 systemd[1]: Started libpod-conmon-ff9637200ef08e07e7c07697fbbb27e7248c344f5bcf501c374bc7775da3318e.scope.
Dec 09 16:16:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:24 compute-0 podman[179904]: 2025-12-09 16:16:24.072232808 +0000 UTC m=+0.023648773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:16:24 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02dbd6a43870c1005d46f8ddd564a05b2391a088b91561fb1a10c6b36c010788/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02dbd6a43870c1005d46f8ddd564a05b2391a088b91561fb1a10c6b36c010788/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02dbd6a43870c1005d46f8ddd564a05b2391a088b91561fb1a10c6b36c010788/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02dbd6a43870c1005d46f8ddd564a05b2391a088b91561fb1a10c6b36c010788/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:24 compute-0 podman[179904]: 2025-12-09 16:16:24.223124577 +0000 UTC m=+0.174540642 container init ff9637200ef08e07e7c07697fbbb27e7248c344f5bcf501c374bc7775da3318e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:16:24 compute-0 podman[179904]: 2025-12-09 16:16:24.231852397 +0000 UTC m=+0.183268362 container start ff9637200ef08e07e7c07697fbbb27e7248c344f5bcf501c374bc7775da3318e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hertz, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:16:24 compute-0 podman[179904]: 2025-12-09 16:16:24.307596845 +0000 UTC m=+0.259012810 container attach ff9637200ef08e07e7c07697fbbb27e7248c344f5bcf501c374bc7775da3318e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hertz, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 16:16:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:16:24 compute-0 musing_hertz[179921]: {
Dec 09 16:16:24 compute-0 musing_hertz[179921]:     "0": [
Dec 09 16:16:24 compute-0 musing_hertz[179921]:         {
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "devices": [
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "/dev/loop3"
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             ],
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_name": "ceph_lv0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_size": "21470642176",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "name": "ceph_lv0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "tags": {
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.cluster_name": "ceph",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.crush_device_class": "",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.encrypted": "0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.objectstore": "bluestore",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.osd_id": "0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.type": "block",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.vdo": "0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.with_tpm": "0"
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             },
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "type": "block",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "vg_name": "ceph_vg0"
Dec 09 16:16:24 compute-0 musing_hertz[179921]:         }
Dec 09 16:16:24 compute-0 musing_hertz[179921]:     ],
Dec 09 16:16:24 compute-0 musing_hertz[179921]:     "1": [
Dec 09 16:16:24 compute-0 musing_hertz[179921]:         {
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "devices": [
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "/dev/loop4"
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             ],
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_name": "ceph_lv1",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_size": "21470642176",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "name": "ceph_lv1",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "tags": {
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.cluster_name": "ceph",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.crush_device_class": "",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.encrypted": "0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.objectstore": "bluestore",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.osd_id": "1",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.type": "block",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.vdo": "0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.with_tpm": "0"
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             },
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "type": "block",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "vg_name": "ceph_vg1"
Dec 09 16:16:24 compute-0 musing_hertz[179921]:         }
Dec 09 16:16:24 compute-0 musing_hertz[179921]:     ],
Dec 09 16:16:24 compute-0 musing_hertz[179921]:     "2": [
Dec 09 16:16:24 compute-0 musing_hertz[179921]:         {
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "devices": [
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "/dev/loop5"
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             ],
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_name": "ceph_lv2",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_size": "21470642176",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "name": "ceph_lv2",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "tags": {
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.cluster_name": "ceph",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.crush_device_class": "",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.encrypted": "0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.objectstore": "bluestore",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.osd_id": "2",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.type": "block",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.vdo": "0",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:                 "ceph.with_tpm": "0"
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             },
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "type": "block",
Dec 09 16:16:24 compute-0 musing_hertz[179921]:             "vg_name": "ceph_vg2"
Dec 09 16:16:24 compute-0 musing_hertz[179921]:         }
Dec 09 16:16:24 compute-0 musing_hertz[179921]:     ]
Dec 09 16:16:24 compute-0 musing_hertz[179921]: }
Dec 09 16:16:24 compute-0 systemd[1]: libpod-ff9637200ef08e07e7c07697fbbb27e7248c344f5bcf501c374bc7775da3318e.scope: Deactivated successfully.
Dec 09 16:16:24 compute-0 podman[179904]: 2025-12-09 16:16:24.528532714 +0000 UTC m=+0.479948679 container died ff9637200ef08e07e7c07697fbbb27e7248c344f5bcf501c374bc7775da3318e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hertz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:16:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-02dbd6a43870c1005d46f8ddd564a05b2391a088b91561fb1a10c6b36c010788-merged.mount: Deactivated successfully.
Dec 09 16:16:24 compute-0 podman[179904]: 2025-12-09 16:16:24.748497365 +0000 UTC m=+0.699913350 container remove ff9637200ef08e07e7c07697fbbb27e7248c344f5bcf501c374bc7775da3318e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:16:24 compute-0 systemd[1]: libpod-conmon-ff9637200ef08e07e7c07697fbbb27e7248c344f5bcf501c374bc7775da3318e.scope: Deactivated successfully.
Dec 09 16:16:24 compute-0 sudo[179828]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:24 compute-0 sudo[179941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:16:24 compute-0 sudo[179941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:16:24 compute-0 sudo[179941]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:24 compute-0 sudo[179966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:16:24 compute-0 sudo[179966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:16:25 compute-0 podman[180003]: 2025-12-09 16:16:25.214049456 +0000 UTC m=+0.053905701 container create 0ead153afa3050c75e27cf7462412228bee42c64f8b6b271fdd0439af30fb584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:16:25 compute-0 podman[180003]: 2025-12-09 16:16:25.181805819 +0000 UTC m=+0.021662094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:16:25 compute-0 systemd[1]: Started libpod-conmon-0ead153afa3050c75e27cf7462412228bee42c64f8b6b271fdd0439af30fb584.scope.
Dec 09 16:16:25 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:16:25 compute-0 podman[180003]: 2025-12-09 16:16:25.435929764 +0000 UTC m=+0.275786009 container init 0ead153afa3050c75e27cf7462412228bee42c64f8b6b271fdd0439af30fb584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:16:25 compute-0 ceph-mon[75222]: pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:25 compute-0 podman[180003]: 2025-12-09 16:16:25.448095395 +0000 UTC m=+0.287951650 container start 0ead153afa3050c75e27cf7462412228bee42c64f8b6b271fdd0439af30fb584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 09 16:16:25 compute-0 exciting_babbage[180019]: 167 167
Dec 09 16:16:25 compute-0 systemd[1]: libpod-0ead153afa3050c75e27cf7462412228bee42c64f8b6b271fdd0439af30fb584.scope: Deactivated successfully.
Dec 09 16:16:25 compute-0 podman[180003]: 2025-12-09 16:16:25.469360916 +0000 UTC m=+0.309217161 container attach 0ead153afa3050c75e27cf7462412228bee42c64f8b6b271fdd0439af30fb584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:16:25 compute-0 podman[180003]: 2025-12-09 16:16:25.471117718 +0000 UTC m=+0.310973963 container died 0ead153afa3050c75e27cf7462412228bee42c64f8b6b271fdd0439af30fb584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:16:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2148e7ef13986925aeeb07ed6c941bb084f9ffb29ab606179c768837b7096b39-merged.mount: Deactivated successfully.
Dec 09 16:16:25 compute-0 podman[180003]: 2025-12-09 16:16:25.51163747 +0000 UTC m=+0.351493725 container remove 0ead153afa3050c75e27cf7462412228bee42c64f8b6b271fdd0439af30fb584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:16:25 compute-0 systemd[1]: libpod-conmon-0ead153afa3050c75e27cf7462412228bee42c64f8b6b271fdd0439af30fb584.scope: Deactivated successfully.
Dec 09 16:16:25 compute-0 podman[180044]: 2025-12-09 16:16:25.642296689 +0000 UTC m=+0.020941282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:16:25 compute-0 podman[180044]: 2025-12-09 16:16:25.849244563 +0000 UTC m=+0.227889166 container create e6fd592ffef01e844d324995ea361351fc25f7e47f1f9fde483ac286ec47ad13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:16:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:16:25
Dec 09 16:16:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:16:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:16:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'backups']
Dec 09 16:16:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:16:25 compute-0 systemd[1]: Started libpod-conmon-e6fd592ffef01e844d324995ea361351fc25f7e47f1f9fde483ac286ec47ad13.scope.
Dec 09 16:16:25 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85d5785049cfa3a5657e4d41194bbdd475f05a8f1b40749207d9862c9522268/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85d5785049cfa3a5657e4d41194bbdd475f05a8f1b40749207d9862c9522268/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85d5785049cfa3a5657e4d41194bbdd475f05a8f1b40749207d9862c9522268/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85d5785049cfa3a5657e4d41194bbdd475f05a8f1b40749207d9862c9522268/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:16:26 compute-0 podman[180044]: 2025-12-09 16:16:26.158967499 +0000 UTC m=+0.537612112 container init e6fd592ffef01e844d324995ea361351fc25f7e47f1f9fde483ac286ec47ad13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_ramanujan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:26 compute-0 podman[180044]: 2025-12-09 16:16:26.166798711 +0000 UTC m=+0.545443294 container start e6fd592ffef01e844d324995ea361351fc25f7e47f1f9fde483ac286ec47ad13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_ramanujan, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:16:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:16:26 compute-0 lvm[180143]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:16:26 compute-0 lvm[180142]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:16:26 compute-0 lvm[180142]: VG ceph_vg0 finished
Dec 09 16:16:26 compute-0 lvm[180143]: VG ceph_vg1 finished
Dec 09 16:16:26 compute-0 lvm[180145]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:16:26 compute-0 lvm[180145]: VG ceph_vg2 finished
Dec 09 16:16:26 compute-0 lvm[180147]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:16:26 compute-0 lvm[180147]: VG ceph_vg2 finished
Dec 09 16:16:26 compute-0 podman[180044]: 2025-12-09 16:16:26.954747284 +0000 UTC m=+1.333391867 container attach e6fd592ffef01e844d324995ea361351fc25f7e47f1f9fde483ac286ec47ad13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 16:16:26 compute-0 heuristic_ramanujan[180061]: {}
Dec 09 16:16:27 compute-0 systemd[1]: libpod-e6fd592ffef01e844d324995ea361351fc25f7e47f1f9fde483ac286ec47ad13.scope: Deactivated successfully.
Dec 09 16:16:27 compute-0 systemd[1]: libpod-e6fd592ffef01e844d324995ea361351fc25f7e47f1f9fde483ac286ec47ad13.scope: Consumed 1.274s CPU time.
Dec 09 16:16:27 compute-0 podman[180044]: 2025-12-09 16:16:27.015374334 +0000 UTC m=+1.394018927 container died e6fd592ffef01e844d324995ea361351fc25f7e47f1f9fde483ac286ec47ad13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:16:27 compute-0 ceph-mon[75222]: pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e85d5785049cfa3a5657e4d41194bbdd475f05a8f1b40749207d9862c9522268-merged.mount: Deactivated successfully.
Dec 09 16:16:27 compute-0 podman[180044]: 2025-12-09 16:16:27.081755264 +0000 UTC m=+1.460399867 container remove e6fd592ffef01e844d324995ea361351fc25f7e47f1f9fde483ac286ec47ad13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:16:27 compute-0 systemd[1]: libpod-conmon-e6fd592ffef01e844d324995ea361351fc25f7e47f1f9fde483ac286ec47ad13.scope: Deactivated successfully.
Dec 09 16:16:27 compute-0 sudo[179966]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:16:27 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:16:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:16:27 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:16:27 compute-0 kernel: SELinux:  Converting 2771 SID table entries...
Dec 09 16:16:27 compute-0 sudo[180161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:16:27 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 16:16:27 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 16:16:27 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 16:16:27 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 16:16:27 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 16:16:27 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 16:16:27 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 16:16:27 compute-0 sudo[180161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:16:27 compute-0 sudo[180161]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:16:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:16:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:28 compute-0 groupadd[180194]: group added to /etc/group: name=dnsmasq, GID=991
Dec 09 16:16:28 compute-0 groupadd[180194]: group added to /etc/gshadow: name=dnsmasq
Dec 09 16:16:28 compute-0 groupadd[180194]: new group: name=dnsmasq, GID=991
Dec 09 16:16:28 compute-0 useradd[180201]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 09 16:16:28 compute-0 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 09 16:16:28 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 09 16:16:28 compute-0 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 09 16:16:29 compute-0 ceph-mon[75222]: pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:29 compute-0 groupadd[180214]: group added to /etc/group: name=clevis, GID=990
Dec 09 16:16:29 compute-0 groupadd[180214]: group added to /etc/gshadow: name=clevis
Dec 09 16:16:29 compute-0 groupadd[180214]: new group: name=clevis, GID=990
Dec 09 16:16:29 compute-0 useradd[180221]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 09 16:16:29 compute-0 usermod[180231]: add 'clevis' to group 'tss'
Dec 09 16:16:29 compute-0 usermod[180231]: add 'clevis' to shadow group 'tss'
Dec 09 16:16:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:16:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:31 compute-0 ceph-mon[75222]: pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:31 compute-0 polkitd[43504]: Reloading rules
Dec 09 16:16:31 compute-0 polkitd[43504]: Collecting garbage unconditionally...
Dec 09 16:16:31 compute-0 polkitd[43504]: Loading rules from directory /etc/polkit-1/rules.d
Dec 09 16:16:31 compute-0 polkitd[43504]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 09 16:16:31 compute-0 polkitd[43504]: Finished loading, compiling and executing 3 rules
Dec 09 16:16:31 compute-0 polkitd[43504]: Reloading rules
Dec 09 16:16:31 compute-0 polkitd[43504]: Collecting garbage unconditionally...
Dec 09 16:16:31 compute-0 polkitd[43504]: Loading rules from directory /etc/polkit-1/rules.d
Dec 09 16:16:31 compute-0 polkitd[43504]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 09 16:16:31 compute-0 polkitd[43504]: Finished loading, compiling and executing 3 rules
Dec 09 16:16:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:33 compute-0 ceph-mon[75222]: pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:16:35 compute-0 ceph-mon[75222]: pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:36 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Dec 09 16:16:36 compute-0 sshd[1005]: Received signal 15; terminating.
Dec 09 16:16:36 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Dec 09 16:16:36 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Dec 09 16:16:36 compute-0 systemd[1]: sshd.service: Consumed 4.783s CPU time, read 32.0K from disk, written 120.0K to disk.
Dec 09 16:16:36 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Dec 09 16:16:36 compute-0 systemd[1]: Stopping sshd-keygen.target...
Dec 09 16:16:36 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 16:16:36 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 16:16:36 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 16:16:36 compute-0 systemd[1]: Reached target sshd-keygen.target.
Dec 09 16:16:36 compute-0 systemd[1]: Starting OpenSSH server daemon...
Dec 09 16:16:36 compute-0 sshd[181036]: Server listening on 0.0.0.0 port 22.
Dec 09 16:16:36 compute-0 sshd[181036]: Server listening on :: port 22.
Dec 09 16:16:36 compute-0 systemd[1]: Started OpenSSH server daemon.
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:16:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:16:37 compute-0 ceph-mon[75222]: pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:38 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 16:16:38 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 16:16:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:38 compute-0 systemd[1]: Reloading.
Dec 09 16:16:38 compute-0 systemd-sysv-generator[181296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:16:38 compute-0 systemd-rc-local-generator[181289]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:16:38 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 16:16:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:16:40 compute-0 ceph-mon[75222]: pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:41 compute-0 sudo[161524]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:41 compute-0 ceph-mon[75222]: pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:42 compute-0 sudo[185869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omzylqzrwuflfuktetmypsosdjxrkmna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297001.557541-336-196256954774452/AnsiballZ_systemd.py'
Dec 09 16:16:42 compute-0 sudo[185869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:42 compute-0 python3.9[185895]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 16:16:42 compute-0 systemd[1]: Reloading.
Dec 09 16:16:42 compute-0 systemd-sysv-generator[186374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:16:42 compute-0 systemd-rc-local-generator[186370]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:16:42 compute-0 sudo[185869]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:43 compute-0 sudo[187158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btfgoojsftvlsaeffkhddfepdswllite ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297003.0581703-336-276628066965984/AnsiballZ_systemd.py'
Dec 09 16:16:43 compute-0 sudo[187158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:43 compute-0 python3.9[187160]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 16:16:43 compute-0 systemd[1]: Reloading.
Dec 09 16:16:43 compute-0 ceph-mon[75222]: pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:43 compute-0 systemd-rc-local-generator[187689]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:16:43 compute-0 systemd-sysv-generator[187694]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:16:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:45 compute-0 sudo[187158]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:16:45 compute-0 sudo[189550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtjdpmchyjflvwpstcqmevullqkjuwpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297005.2113535-336-255761496485002/AnsiballZ_systemd.py'
Dec 09 16:16:45 compute-0 sudo[189550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:45 compute-0 podman[189488]: 2025-12-09 16:16:45.61534281 +0000 UTC m=+0.156763365 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:16:45 compute-0 ceph-mon[75222]: pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:45 compute-0 python3.9[189577]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 16:16:45 compute-0 systemd[1]: Reloading.
Dec 09 16:16:45 compute-0 systemd-rc-local-generator[190000]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:16:45 compute-0 systemd-sysv-generator[190004]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:16:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:46 compute-0 sudo[189550]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:46 compute-0 sudo[190431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmotbmvgfnmwglaclceehygiicvfygck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297006.3829892-336-251723621775960/AnsiballZ_systemd.py'
Dec 09 16:16:46 compute-0 sudo[190431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:46 compute-0 ceph-mon[75222]: pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:47 compute-0 python3.9[190433]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 16:16:47 compute-0 systemd[1]: Reloading.
Dec 09 16:16:47 compute-0 systemd-rc-local-generator[190464]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:16:47 compute-0 systemd-sysv-generator[190467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:16:47 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 16:16:47 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 16:16:47 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.138s CPU time.
Dec 09 16:16:47 compute-0 systemd[1]: run-r075a37a6f263466cb6c8023f98b59a02.service: Deactivated successfully.
Dec 09 16:16:47 compute-0 sudo[190431]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:48 compute-0 sudo[190640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvwsphzgklbipxgpkftecxdptejykqxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297007.7722065-365-90490203395744/AnsiballZ_systemd.py'
Dec 09 16:16:48 compute-0 sudo[190640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:48 compute-0 podman[190597]: 2025-12-09 16:16:48.090512863 +0000 UTC m=+0.055088067 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 16:16:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:48 compute-0 python3.9[190644]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:16:48 compute-0 systemd[1]: Reloading.
Dec 09 16:16:48 compute-0 systemd-rc-local-generator[190668]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:16:48 compute-0 systemd-sysv-generator[190676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:16:48 compute-0 sudo[190640]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:49 compute-0 sudo[190834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgkhtjamswcobvkzstpsbqgewijhzrsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297008.8629127-365-217011024479462/AnsiballZ_systemd.py'
Dec 09 16:16:49 compute-0 sudo[190834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:49 compute-0 ceph-mon[75222]: pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:49 compute-0 sshd-session[190782]: Invalid user odoo from 146.190.31.45 port 42536
Dec 09 16:16:49 compute-0 python3.9[190836]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:16:49 compute-0 sshd-session[190782]: Connection closed by invalid user odoo 146.190.31.45 port 42536 [preauth]
Dec 09 16:16:49 compute-0 systemd[1]: Reloading.
Dec 09 16:16:49 compute-0 systemd-rc-local-generator[190864]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:16:49 compute-0 systemd-sysv-generator[190868]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:16:49 compute-0 sudo[190834]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:16:50 compute-0 sudo[191024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssmwiercowqndzboqdhhkbixrkythosq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297009.9840877-365-99635315300761/AnsiballZ_systemd.py'
Dec 09 16:16:50 compute-0 sudo[191024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:50 compute-0 python3.9[191026]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:16:50 compute-0 systemd[1]: Reloading.
Dec 09 16:16:51 compute-0 systemd-rc-local-generator[191055]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:16:51 compute-0 systemd-sysv-generator[191060]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:16:51 compute-0 ceph-mon[75222]: pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:51 compute-0 sudo[191024]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:51 compute-0 sudo[191215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvidznguaaxkpfhodkbxpdnbcqobbpnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297011.422532-365-202355458596663/AnsiballZ_systemd.py'
Dec 09 16:16:51 compute-0 sudo[191215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:52 compute-0 python3.9[191217]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:16:52 compute-0 sudo[191215]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:52 compute-0 sudo[191370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgizfqvlsbawyhvctcyucfrtwnkdfdyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297012.246026-365-145003612747704/AnsiballZ_systemd.py'
Dec 09 16:16:52 compute-0 sudo[191370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:52 compute-0 python3.9[191372]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:16:52 compute-0 systemd[1]: Reloading.
Dec 09 16:16:53 compute-0 systemd-rc-local-generator[191398]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:16:53 compute-0 systemd-sysv-generator[191403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:16:53 compute-0 sudo[191370]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:53 compute-0 ceph-mon[75222]: pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:54 compute-0 sudo[191559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxrnlvjpleybrkobsqbvnrzdzuotytzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297013.6641676-401-196924458480833/AnsiballZ_systemd.py'
Dec 09 16:16:54 compute-0 sudo[191559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:54 compute-0 python3.9[191561]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 16:16:54 compute-0 systemd[1]: Reloading.
Dec 09 16:16:54 compute-0 systemd-sysv-generator[191594]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:16:54 compute-0 systemd-rc-local-generator[191588]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:16:54 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 09 16:16:54 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 09 16:16:54 compute-0 sudo[191559]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:16:55 compute-0 sudo[191752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cctvylmierxqzutvjvatlcgbsplqpzbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297015.0599477-409-185425970637247/AnsiballZ_systemd.py'
Dec 09 16:16:55 compute-0 sudo[191752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:55 compute-0 ceph-mon[75222]: pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:55 compute-0 python3.9[191754]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:16:55 compute-0 sudo[191752]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:56 compute-0 sudo[191907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odvtrskiihlfqdbydwrxwnsxdbgbthqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297015.868889-409-208002431420692/AnsiballZ_systemd.py'
Dec 09 16:16:56 compute-0 sudo[191907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:16:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:16:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:16:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:16:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:16:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:16:56 compute-0 python3.9[191909]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:16:56 compute-0 sudo[191907]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:57 compute-0 sudo[192062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgnmmjxfxpcjxkyqnaexhmdfsjthfqxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297016.7321358-409-178907021365382/AnsiballZ_systemd.py'
Dec 09 16:16:57 compute-0 sudo[192062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:57 compute-0 python3.9[192064]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:16:57 compute-0 sudo[192062]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:57 compute-0 sudo[192217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nafugqvoookrycibasbgtkccqsrfhfvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297017.5605717-409-218778586072881/AnsiballZ_systemd.py'
Dec 09 16:16:57 compute-0 sudo[192217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:58 compute-0 python3.9[192219]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:16:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:58 compute-0 ceph-mon[75222]: pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:58 compute-0 sudo[192217]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:58 compute-0 sudo[192372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjgkrvdvylbtanocyfjukhnuulgjenla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297018.4262726-409-179826686195982/AnsiballZ_systemd.py'
Dec 09 16:16:58 compute-0 sudo[192372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:59 compute-0 python3.9[192374]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:16:59 compute-0 sudo[192372]: pam_unix(sudo:session): session closed for user root
Dec 09 16:16:59 compute-0 ceph-mon[75222]: pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:16:59 compute-0 sudo[192527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnwqzkedcabefglvqgdntswhduzsblbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297019.3287692-409-51980385040581/AnsiballZ_systemd.py'
Dec 09 16:16:59 compute-0 sudo[192527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:16:59 compute-0 python3.9[192529]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:17:00 compute-0 sudo[192527]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:17:00 compute-0 sudo[192682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpzrnuvjrfjqcfzflajaviqiegzqliks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297020.1963453-409-265009584339227/AnsiballZ_systemd.py'
Dec 09 16:17:00 compute-0 sudo[192682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:00 compute-0 python3.9[192684]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:17:00 compute-0 sudo[192682]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:01 compute-0 sudo[192837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlkxgzhcuhqgdhuaqvxyrkdwzieghmjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297021.0935957-409-157148090147578/AnsiballZ_systemd.py'
Dec 09 16:17:01 compute-0 sudo[192837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:01 compute-0 python3.9[192839]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:17:01 compute-0 ceph-mon[75222]: pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:01 compute-0 sudo[192837]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:02 compute-0 sudo[192992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hivtaxufutemupjbzditoizjxtrmirgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297021.8832233-409-226323472595939/AnsiballZ_systemd.py'
Dec 09 16:17:02 compute-0 sudo[192992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:02 compute-0 python3.9[192994]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:17:02 compute-0 sudo[192992]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:03 compute-0 sudo[193147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mstxiwiigjryxbmyoywxvzwwoocbrche ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297022.7997315-409-872387858622/AnsiballZ_systemd.py'
Dec 09 16:17:03 compute-0 sudo[193147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:03 compute-0 python3.9[193149]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:17:03 compute-0 sudo[193147]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:03 compute-0 ceph-mon[75222]: pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:04 compute-0 sudo[193302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmldewumuadlxhglnxbfewwogymdmrsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297023.7235706-409-104664229033350/AnsiballZ_systemd.py'
Dec 09 16:17:04 compute-0 sudo[193302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:04 compute-0 python3.9[193304]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:17:04 compute-0 sudo[193302]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.710119) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297024710155, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2039, "num_deletes": 251, "total_data_size": 3582259, "memory_usage": 3633392, "flush_reason": "Manual Compaction"}
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297024733366, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3506024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9725, "largest_seqno": 11763, "table_properties": {"data_size": 3496720, "index_size": 5926, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17743, "raw_average_key_size": 19, "raw_value_size": 3478354, "raw_average_value_size": 3809, "num_data_blocks": 269, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296790, "oldest_key_time": 1765296790, "file_creation_time": 1765297024, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 23288 microseconds, and 8677 cpu microseconds.
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.733407) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3506024 bytes OK
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.733424) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.735216) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.735231) EVENT_LOG_v1 {"time_micros": 1765297024735227, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.735249) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3573759, prev total WAL file size 3573759, number of live WAL files 2.
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.736279) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3423KB)], [26(6043KB)]
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297024736329, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9694056, "oldest_snapshot_seqno": -1}
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3710 keys, 8100545 bytes, temperature: kUnknown
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297024789537, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8100545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8071877, "index_size": 18318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 89146, "raw_average_key_size": 24, "raw_value_size": 8001027, "raw_average_value_size": 2156, "num_data_blocks": 791, "num_entries": 3710, "num_filter_entries": 3710, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765297024, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.789828) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8100545 bytes
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.791174) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.9 rd, 152.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4224, records dropped: 514 output_compression: NoCompression
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.791195) EVENT_LOG_v1 {"time_micros": 1765297024791184, "job": 10, "event": "compaction_finished", "compaction_time_micros": 53279, "compaction_time_cpu_micros": 23842, "output_level": 6, "num_output_files": 1, "total_output_size": 8100545, "num_input_records": 4224, "num_output_records": 3710, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297024791987, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297024793426, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.736182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.793462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.793465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.793467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.793468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:17:04 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:17:04.793470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:17:04 compute-0 sudo[193457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sozwmdjmaqrbzjzhkarwwhgxcezkevme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297024.5238779-409-145858496162001/AnsiballZ_systemd.py'
Dec 09 16:17:04 compute-0 sudo[193457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:05 compute-0 python3.9[193459]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:17:05 compute-0 sudo[193457]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:17:05 compute-0 sudo[193612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvswkchulzyxivgxjhjeemvdmvadltku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297025.3128655-409-16137790636339/AnsiballZ_systemd.py'
Dec 09 16:17:05 compute-0 sudo[193612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:05 compute-0 ceph-mon[75222]: pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:05 compute-0 python3.9[193614]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:17:05 compute-0 sudo[193612]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:06 compute-0 sudo[193767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfpdcjbqjutzwhmnekoqhaxxnzjacnod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297026.1008945-409-18196928522488/AnsiballZ_systemd.py'
Dec 09 16:17:06 compute-0 sudo[193767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:06 compute-0 python3.9[193769]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 16:17:06 compute-0 sudo[193767]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:07 compute-0 sudo[193922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-robidrlitqharfnvdqniknwusjrkfuiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297027.3188639-511-244819798314709/AnsiballZ_file.py'
Dec 09 16:17:07 compute-0 sudo[193922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:07 compute-0 ceph-mon[75222]: pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:07 compute-0 python3.9[193924]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:17:07 compute-0 sudo[193922]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:08 compute-0 sudo[194074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkpqsegdylqamfijymreeeoosfuucigm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297027.9892938-511-70543582633640/AnsiballZ_file.py'
Dec 09 16:17:08 compute-0 sudo[194074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:08 compute-0 python3.9[194076]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:17:08 compute-0 sudo[194074]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:09 compute-0 sudo[194226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtwnlhtvnjcbcnesnumlfcfryddmvvil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297028.6868465-511-18249664227325/AnsiballZ_file.py'
Dec 09 16:17:09 compute-0 sudo[194226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:09 compute-0 python3.9[194228]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:17:09 compute-0 sudo[194226]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:09 compute-0 ceph-mon[75222]: pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:09 compute-0 sudo[194378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afhaswiykxjquyitaaonpadenktwltpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297029.4334157-511-93781598383640/AnsiballZ_file.py'
Dec 09 16:17:09 compute-0 sudo[194378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:10 compute-0 python3.9[194380]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:17:10 compute-0 sudo[194378]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:17:10 compute-0 sudo[194530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cegalbwrsgkmrodhpllwibvhbdfbqotg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297030.1638484-511-105172313503475/AnsiballZ_file.py'
Dec 09 16:17:10 compute-0 sudo[194530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:10 compute-0 python3.9[194532]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:17:10 compute-0 sudo[194530]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:11 compute-0 sudo[194682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlpcttchwnulqntwlnkibcphlueykuxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297030.7625713-511-38515785136009/AnsiballZ_file.py'
Dec 09 16:17:11 compute-0 sudo[194682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:11 compute-0 python3.9[194684]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:17:11 compute-0 sudo[194682]: pam_unix(sudo:session): session closed for user root
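Annotation: the run of ansible.builtin.file tasks above prepares /etc/tmpfiles.d, /var/lib/edpm-config/firewall and the /etc/pki/{libvirt,libvirt/private,CA,qemu} directories, all labeled container_file_t so the containerized services can bind-mount them (owner, group and mode vary per task; /etc/pki/qemu is group qemu). A rough Python sketch of one such task; Ansible itself goes through the libselinux bindings, the chcon call here is an assumed CLI stand-in:

    import os
    import shutil
    import subprocess

    def make_labeled_dir(path, mode=0o755, group="root",
                         setype="container_file_t"):
        # Create the directory, set ownership and mode, then apply
        # the SELinux type, as the logged file tasks do.
        os.makedirs(path, mode=mode, exist_ok=True)
        shutil.chown(path, user="root", group=group)
        os.chmod(path, mode)
        subprocess.run(["chcon", "-t", setype, path], check=True)

    make_labeled_dir("/etc/pki/libvirt/private")  # one of the logged paths
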
Dec 09 16:17:11 compute-0 ceph-mon[75222]: pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:12 compute-0 sudo[194834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcjzqzzpdxgihyafuprhvafpgqagxsgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297031.579804-554-225500017482888/AnsiballZ_stat.py'
Dec 09 16:17:12 compute-0 sudo[194834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:12 compute-0 python3.9[194836]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:12 compute-0 sudo[194834]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:12 compute-0 sudo[194959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhpgdtvqafmwcmxulmnlmnnlsetkvjcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297031.579804-554-225500017482888/AnsiballZ_copy.py'
Dec 09 16:17:12 compute-0 sudo[194959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:13 compute-0 python3.9[194961]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765297031.579804-554-225500017482888/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:13 compute-0 sudo[194959]: pam_unix(sudo:session): session closed for user root
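Annotation: each config deployment in this stretch is a two-step stat/copy pair: ansible.legacy.stat takes a sha1 checksum of the target (here /etc/libvirt/virtlogd.conf), then ansible.legacy.copy rewrites it only if the staged source differs. The same pattern repeats below for virtnodedevd.conf, virtproxyd.conf, virtqemud.conf, qemu.conf, virtsecretd.conf, auth.conf and the sasl2 config. A sketch of that idempotency check, under the simplifying assumption that only file content matters (Ansible also reconciles owner/group/mode):

    import hashlib
    import shutil

    def sha1_of(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_if_changed(src, dest):
        # Rewrite dest only when its sha1 differs from the staged
        # source, mirroring the logged stat-then-copy sequence.
        try:
            if sha1_of(dest) == sha1_of(src):
                return False
        except FileNotFoundError:
            pass
        shutil.copy2(src, dest)
        return True
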
Dec 09 16:17:13 compute-0 sudo[195111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeiqefywrzunrlxqonowmdxmeeoshxel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297033.2852888-554-96336062449685/AnsiballZ_stat.py'
Dec 09 16:17:13 compute-0 sudo[195111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:13 compute-0 python3.9[195113]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:13 compute-0 ceph-mon[75222]: pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:13 compute-0 sudo[195111]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:14 compute-0 sudo[195236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paqnhtvcjylxkzjetlprsacpuqiogwgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297033.2852888-554-96336062449685/AnsiballZ_copy.py'
Dec 09 16:17:14 compute-0 sudo[195236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:14 compute-0 python3.9[195238]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765297033.2852888-554-96336062449685/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:14 compute-0 sudo[195236]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:14 compute-0 sudo[195388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysujljpddfodvzdfkbwgbpmajtoqmdpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297034.5252619-554-246662011485805/AnsiballZ_stat.py'
Dec 09 16:17:14 compute-0 sudo[195388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:15 compute-0 python3.9[195390]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:15 compute-0 sudo[195388]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:17:15 compute-0 sudo[195513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpjlslxvndsnvlgwqvqnesdwgnuyrfgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297034.5252619-554-246662011485805/AnsiballZ_copy.py'
Dec 09 16:17:15 compute-0 sudo[195513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:15 compute-0 python3.9[195515]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765297034.5252619-554-246662011485805/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:15 compute-0 sudo[195513]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:15 compute-0 ceph-mon[75222]: pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:15 compute-0 podman[195516]: 2025-12-09 16:17:15.921626981 +0000 UTC m=+0.136939567 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
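Annotation: the podman health_status event above comes from the periodic healthcheck configured for ovn_controller (its config_data shows test '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/ovn_controller). The same check can be exercised on demand; a small sketch, using the container name from the log:

    import subprocess

    # Re-run the container's configured healthcheck; exit status 0
    # corresponds to the health_status=healthy events in the journal.
    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ovn_controller"]).returncode
    print("healthy" if rc == 0 else "unhealthy")
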
Dec 09 16:17:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:16 compute-0 sudo[195691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdqyjslhtmirtyirspqurcptxvngemsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297035.9645875-554-18379031466970/AnsiballZ_stat.py'
Dec 09 16:17:16 compute-0 sudo[195691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:16 compute-0 python3.9[195693]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:16 compute-0 sudo[195691]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:16 compute-0 ceph-mon[75222]: pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:16 compute-0 sudo[195816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbobqyxomxvtoqrpxkctxqabvnhsyhvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297035.9645875-554-18379031466970/AnsiballZ_copy.py'
Dec 09 16:17:16 compute-0 sudo[195816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:17 compute-0 python3.9[195818]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765297035.9645875-554-18379031466970/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:17 compute-0 sudo[195816]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:17 compute-0 sudo[195968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zswdwlqxabcouomwtqyisblzgjvskswb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297037.3269455-554-145569214355054/AnsiballZ_stat.py'
Dec 09 16:17:17 compute-0 sudo[195968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:17 compute-0 python3.9[195970]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:17:17.833 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:17:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:17:17.833 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:17:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:17:17.834 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:17:17 compute-0 sudo[195968]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:18 compute-0 podman[196067]: 2025-12-09 16:17:18.211293908 +0000 UTC m=+0.056921224 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 09 16:17:18 compute-0 sudo[196112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ittylvsmhmwfkrituldptuzpkenwxtrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297037.3269455-554-145569214355054/AnsiballZ_copy.py'
Dec 09 16:17:18 compute-0 sudo[196112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:18 compute-0 python3.9[196114]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765297037.3269455-554-145569214355054/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:18 compute-0 sudo[196112]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:18 compute-0 sudo[196265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqcaylqfjbjjbmkdvofklmmbktjqpkat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297038.5849037-554-189464703949916/AnsiballZ_stat.py'
Dec 09 16:17:18 compute-0 sudo[196265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:19 compute-0 python3.9[196267]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:19 compute-0 sudo[196265]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:19 compute-0 ceph-mon[75222]: pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:19 compute-0 sudo[196390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwtfanzqtoyyshtsiouuxthoeubcftlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297038.5849037-554-189464703949916/AnsiballZ_copy.py'
Dec 09 16:17:19 compute-0 sudo[196390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:19 compute-0 python3.9[196392]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765297038.5849037-554-189464703949916/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:19 compute-0 sudo[196390]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:20 compute-0 sudo[196542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpyontzwbdzxldelznhzkopzsfruihpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297039.982251-554-13750779484898/AnsiballZ_stat.py'
Dec 09 16:17:20 compute-0 sudo[196542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:17:20 compute-0 python3.9[196544]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:20 compute-0 sudo[196542]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:20 compute-0 sudo[196665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amunjlqapjqslmdcgfgtqhfealdzmukb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297039.982251-554-13750779484898/AnsiballZ_copy.py'
Dec 09 16:17:20 compute-0 sudo[196665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:21 compute-0 python3.9[196667]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765297039.982251-554-13750779484898/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:21 compute-0 sudo[196665]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:21 compute-0 ceph-mon[75222]: pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:21 compute-0 sudo[196817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjrdyyrmvkkgqprbylhrjvqtutxnetwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297041.2401786-554-67399571199680/AnsiballZ_stat.py'
Dec 09 16:17:21 compute-0 sudo[196817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:21 compute-0 python3.9[196819]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:21 compute-0 sudo[196817]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:22 compute-0 sudo[196942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byyswmkoovwietzeanulipvxmoddtosk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297041.2401786-554-67399571199680/AnsiballZ_copy.py'
Dec 09 16:17:22 compute-0 sudo[196942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:22 compute-0 python3.9[196944]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765297041.2401786-554-67399571199680/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:22 compute-0 sudo[196942]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:22 compute-0 sudo[197094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xczyjwyfcjuxiwcfkmxvdggqtockeujv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297042.4895022-667-47748899736918/AnsiballZ_command.py'
Dec 09 16:17:22 compute-0 sudo[197094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:22 compute-0 python3.9[197096]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 09 16:17:23 compute-0 sudo[197094]: pam_unix(sudo:session): session closed for user root
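Annotation: the command task above seeds libvirt's SASL database for live migration: saslpasswd2 creates a "migration" user in realm "openstack" inside /etc/libvirt/passwd.db, reading the password from stdin (-p); the module args show the CI test value 12345678 in the clear. The same call from Python, with the flags exactly as logged:

    import subprocess

    # -f: target sasldb, -p: password on stdin, -a: application name,
    # -u: realm; the final argument is the user being created/updated.
    subprocess.run(
        ["saslpasswd2", "-f", "/etc/libvirt/passwd.db",
         "-p", "-a", "libvirt", "-u", "openstack", "migration"],
        input=b"12345678\n",  # test credential, visible in the log above
        check=True,
    )
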
Dec 09 16:17:23 compute-0 ceph-mon[75222]: pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:23 compute-0 sudo[197247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksxorxzjnrofhvcokxrqgmxdtuslnhnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297043.4353678-676-96547210726110/AnsiballZ_file.py'
Dec 09 16:17:23 compute-0 sudo[197247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:23 compute-0 python3.9[197249]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:23 compute-0 sudo[197247]: pam_unix(sudo:session): session closed for user root
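Annotation: from here the play creates a <unit>.socket.d drop-in directory for each libvirt socket unit (virtlogd, virtnodedevd, virtproxyd, virtqemud, each with ro/admin variants). A drop-in file placed in such a directory overrides settings of the packaged unit without editing it. The log only shows the directories being created; the override content in this sketch is purely illustrative, not what the play actually installs:

    import pathlib
    import subprocess

    # Hypothetical drop-in: tighten the socket mode for virtlogd.
    # The real override written later by the play is not shown here.
    dropin = pathlib.Path(
        "/etc/systemd/system/virtlogd.socket.d/override.conf")
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text("[Socket]\nSocketMode=0600\n")
    subprocess.run(["systemctl", "daemon-reload"], check=True)
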
Dec 09 16:17:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:24 compute-0 sudo[197399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnqrzgaasumggkffaflaibcytgkjgafj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297044.0976384-676-223071381769583/AnsiballZ_file.py'
Dec 09 16:17:24 compute-0 sudo[197399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:24 compute-0 python3.9[197401]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:24 compute-0 sudo[197399]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:25 compute-0 sudo[197551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wikhjxtmlghtgrgvqxccwwanugsazqaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297044.9187307-676-19362857703031/AnsiballZ_file.py'
Dec 09 16:17:25 compute-0 sudo[197551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:17:25 compute-0 python3.9[197553]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:25 compute-0 sudo[197551]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:25 compute-0 ceph-mon[75222]: pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:25 compute-0 sudo[197703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwffwiigekhcsyjzadorxlbdgitczvbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297045.5239983-676-139891481442732/AnsiballZ_file.py'
Dec 09 16:17:25 compute-0 sudo[197703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:17:25
Dec 09 16:17:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:17:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:17:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['volumes', 'images', 'backups', 'default.rgw.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data']
Dec 09 16:17:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
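Annotation: the balancer lines above show the mgr's upmap balancer waking up, evaluating all eleven pools, and preparing 0/10 changes, i.e. the PG distribution already sits within the 0.05 max-misplaced threshold and no upmap entries are needed. The module state can be checked from any admin node; a sketch, assuming the ceph CLI and an admin keyring are available:

    import json
    import subprocess

    # "ceph balancer status" reports active/mode/last optimization,
    # matching the "Mode upmap, max misplaced 0.050000" lines above.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    print(json.loads(out.stdout).get("mode"))
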
Dec 09 16:17:25 compute-0 python3.9[197705]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:26 compute-0 sudo[197703]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:26 compute-0 sudo[197855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lenjlfpfizrjqsdjuvnrleakayydudlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297046.1731608-676-118737109425840/AnsiballZ_file.py'
Dec 09 16:17:26 compute-0 sudo[197855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:17:26 compute-0 python3.9[197857]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:17:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:17:26 compute-0 sudo[197855]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:27 compute-0 sudo[198007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwykymteccxpxrmvwlismivvgqqpgbmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297046.835802-676-212322881080239/AnsiballZ_file.py'
Dec 09 16:17:27 compute-0 sudo[198007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:27 compute-0 sudo[198010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:17:27 compute-0 sudo[198010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:17:27 compute-0 sudo[198010]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:27 compute-0 python3.9[198009]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:27 compute-0 sudo[198007]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:27 compute-0 sudo[198035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:17:27 compute-0 sudo[198035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:17:27 compute-0 ceph-mon[75222]: pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:27 compute-0 sudo[198223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlalowhgoaubmvqrocmwspebzsfcyuln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297047.566087-676-206132876886588/AnsiballZ_file.py'
Dec 09 16:17:27 compute-0 sudo[198223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:28 compute-0 sudo[198035]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:28 compute-0 python3.9[198227]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:17:28 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:17:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:17:28 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:17:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:17:28 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:17:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:17:28 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:17:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:17:28 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:17:28 compute-0 sudo[198223]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:17:28 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:17:28 compute-0 sudo[198242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:17:28 compute-0 sudo[198242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:17:28 compute-0 sudo[198242]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:28 compute-0 sudo[198290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:17:28 compute-0 sudo[198290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:17:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:28 compute-0 podman[198427]: 2025-12-09 16:17:28.482385595 +0000 UTC m=+0.040560124 container create a69e598aa0fc842a763b40e9488d645d828afc93f8175ddd6893c946f35ece18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 09 16:17:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:17:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:17:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:17:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:17:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:17:28 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
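Annotation: the handle_command/audit pairs above (each logged twice, once by the mon itself and once relayed as cluster log entries) show cephadm, via the active mgr, collecting what it needs before touching OSDs: a minimal ceph.conf, the client.admin and client.bootstrap-osd keyrings, and the tree of destroyed OSDs eligible for replacement. The same mon commands work from an admin shell; a sketch:

    import subprocess

    # The commands dispatched by mgr.compute-0.ysegzv in the audit log:
    subprocess.run(["ceph", "config", "generate-minimal-conf"], check=True)
    subprocess.run(["ceph", "auth", "get", "client.bootstrap-osd"], check=True)
    subprocess.run(["ceph", "osd", "tree", "destroyed",
                    "--format", "json"], check=True)
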
Dec 09 16:17:28 compute-0 systemd[1]: Started libpod-conmon-a69e598aa0fc842a763b40e9488d645d828afc93f8175ddd6893c946f35ece18.scope.
Dec 09 16:17:28 compute-0 sudo[198470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpvyuungzwcoldysryceoqwhlifszrsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297048.2239501-676-232383127357901/AnsiballZ_file.py'
Dec 09 16:17:28 compute-0 sudo[198470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:28 compute-0 podman[198427]: 2025-12-09 16:17:28.464756618 +0000 UTC m=+0.022931167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:17:28 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:17:28 compute-0 podman[198427]: 2025-12-09 16:17:28.58341462 +0000 UTC m=+0.141589199 container init a69e598aa0fc842a763b40e9488d645d828afc93f8175ddd6893c946f35ece18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_snyder, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 09 16:17:28 compute-0 podman[198427]: 2025-12-09 16:17:28.593350929 +0000 UTC m=+0.151525498 container start a69e598aa0fc842a763b40e9488d645d828afc93f8175ddd6893c946f35ece18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Dec 09 16:17:28 compute-0 podman[198427]: 2025-12-09 16:17:28.597226648 +0000 UTC m=+0.155401228 container attach a69e598aa0fc842a763b40e9488d645d828afc93f8175ddd6893c946f35ece18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 09 16:17:28 compute-0 quirky_snyder[198472]: 167 167
Dec 09 16:17:28 compute-0 systemd[1]: libpod-a69e598aa0fc842a763b40e9488d645d828afc93f8175ddd6893c946f35ece18.scope: Deactivated successfully.
Dec 09 16:17:28 compute-0 conmon[198472]: conmon a69e598aa0fc842a763b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a69e598aa0fc842a763b40e9488d645d828afc93f8175ddd6893c946f35ece18.scope/container/memory.events
Dec 09 16:17:28 compute-0 podman[198427]: 2025-12-09 16:17:28.605659826 +0000 UTC m=+0.163834355 container died a69e598aa0fc842a763b40e9488d645d828afc93f8175ddd6893c946f35ece18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_snyder, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:17:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a50b35b4a28fdfe75e1be34e41a1bd63922f620559386aafb3d241737144ba9b-merged.mount: Deactivated successfully.
Dec 09 16:17:28 compute-0 podman[198427]: 2025-12-09 16:17:28.663197576 +0000 UTC m=+0.221372095 container remove a69e598aa0fc842a763b40e9488d645d828afc93f8175ddd6893c946f35ece18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_snyder, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:17:28 compute-0 systemd[1]: libpod-conmon-a69e598aa0fc842a763b40e9488d645d828afc93f8175ddd6893c946f35ece18.scope: Deactivated successfully.
Dec 09 16:17:28 compute-0 python3.9[198474]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:28 compute-0 sudo[198470]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:28 compute-0 podman[198497]: 2025-12-09 16:17:28.854960456 +0000 UTC m=+0.050631507 container create e62f2339c707a3f417e3cdb061a5df3181093f51626a0ed69a3d11626708bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:17:28 compute-0 systemd[1]: Started libpod-conmon-e62f2339c707a3f417e3cdb061a5df3181093f51626a0ed69a3d11626708bc08.scope.
Dec 09 16:17:28 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6a901fc5cf092442e0a0eb10021692206e97afa8f1893c38f06b0024ea6e8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6a901fc5cf092442e0a0eb10021692206e97afa8f1893c38f06b0024ea6e8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6a901fc5cf092442e0a0eb10021692206e97afa8f1893c38f06b0024ea6e8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6a901fc5cf092442e0a0eb10021692206e97afa8f1893c38f06b0024ea6e8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6a901fc5cf092442e0a0eb10021692206e97afa8f1893c38f06b0024ea6e8b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:17:28 compute-0 podman[198497]: 2025-12-09 16:17:28.932276933 +0000 UTC m=+0.127947994 container init e62f2339c707a3f417e3cdb061a5df3181093f51626a0ed69a3d11626708bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:17:28 compute-0 podman[198497]: 2025-12-09 16:17:28.83555084 +0000 UTC m=+0.031221911 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:17:28 compute-0 podman[198497]: 2025-12-09 16:17:28.942301436 +0000 UTC m=+0.137972497 container start e62f2339c707a3f417e3cdb061a5df3181093f51626a0ed69a3d11626708bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:17:28 compute-0 podman[198497]: 2025-12-09 16:17:28.946092062 +0000 UTC m=+0.141763143 container attach e62f2339c707a3f417e3cdb061a5df3181093f51626a0ed69a3d11626708bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Dec 09 16:17:29 compute-0 sudo[198670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkfosfsgzjikzjjytzqunisrpqetokba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297048.9445753-676-26520444799115/AnsiballZ_file.py'
Dec 09 16:17:29 compute-0 sudo[198670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:29 compute-0 sharp_edison[198537]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:17:29 compute-0 sharp_edison[198537]: --> All data devices are unavailable
Dec 09 16:17:29 compute-0 python3.9[198674]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:29 compute-0 systemd[1]: libpod-e62f2339c707a3f417e3cdb061a5df3181093f51626a0ed69a3d11626708bc08.scope: Deactivated successfully.
Dec 09 16:17:29 compute-0 sudo[198670]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:29 compute-0 podman[198497]: 2025-12-09 16:17:29.416662964 +0000 UTC m=+0.612334025 container died e62f2339c707a3f417e3cdb061a5df3181093f51626a0ed69a3d11626708bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 09 16:17:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-af6a901fc5cf092442e0a0eb10021692206e97afa8f1893c38f06b0024ea6e8b-merged.mount: Deactivated successfully.
Dec 09 16:17:29 compute-0 podman[198497]: 2025-12-09 16:17:29.462570916 +0000 UTC m=+0.658241967 container remove e62f2339c707a3f417e3cdb061a5df3181093f51626a0ed69a3d11626708bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 09 16:17:29 compute-0 systemd[1]: libpod-conmon-e62f2339c707a3f417e3cdb061a5df3181093f51626a0ed69a3d11626708bc08.scope: Deactivated successfully.
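
The sharp_edison lines above trace one complete short-lived cephadm helper container: image pull, init, start, attach, died, remove, bracketed by the matching libpod and libpod-conmon systemd scopes. A minimal sketch for watching the same lifecycle live from the host, assuming podman's events subcommand is available (the filter and printed fields are illustrative, not tied to this run):

    import json
    import subprocess

    # Stream podman lifecycle events as JSON, one object per line.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:  # runs until interrupted
        ev = json.loads(line)
        # Statuses mirror the journal above: init, start, attach, died, remove.
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))
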
Dec 09 16:17:29 compute-0 ceph-mon[75222]: pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:29 compute-0 sudo[198290]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:29 compute-0 sudo[198720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:17:29 compute-0 sudo[198720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:17:29 compute-0 sudo[198720]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:29 compute-0 sudo[198768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:17:29 compute-0 sudo[198768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:17:29 compute-0 sudo[198897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baafozcuxrmupshvqscrrhxleiywsqcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297049.579454-676-58605578516254/AnsiballZ_file.py'
Dec 09 16:17:29 compute-0 sudo[198897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:29 compute-0 podman[198909]: 2025-12-09 16:17:29.924705509 +0000 UTC m=+0.036744025 container create fa9e6733a20dc271f2e45d20c468cb34eec94983cfbbb63a30de4a300e0ceca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:17:29 compute-0 systemd[1]: Started libpod-conmon-fa9e6733a20dc271f2e45d20c468cb34eec94983cfbbb63a30de4a300e0ceca8.scope.
Dec 09 16:17:29 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:17:29 compute-0 podman[198909]: 2025-12-09 16:17:29.994844624 +0000 UTC m=+0.106883170 container init fa9e6733a20dc271f2e45d20c468cb34eec94983cfbbb63a30de4a300e0ceca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elgamal, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:17:30 compute-0 podman[198909]: 2025-12-09 16:17:29.907461063 +0000 UTC m=+0.019499599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:17:30 compute-0 podman[198909]: 2025-12-09 16:17:30.004885887 +0000 UTC m=+0.116924403 container start fa9e6733a20dc271f2e45d20c468cb34eec94983cfbbb63a30de4a300e0ceca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:17:30 compute-0 podman[198909]: 2025-12-09 16:17:30.007805779 +0000 UTC m=+0.119844315 container attach fa9e6733a20dc271f2e45d20c468cb34eec94983cfbbb63a30de4a300e0ceca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:17:30 compute-0 objective_elgamal[198926]: 167 167
Dec 09 16:17:30 compute-0 systemd[1]: libpod-fa9e6733a20dc271f2e45d20c468cb34eec94983cfbbb63a30de4a300e0ceca8.scope: Deactivated successfully.
Dec 09 16:17:30 compute-0 podman[198909]: 2025-12-09 16:17:30.011269826 +0000 UTC m=+0.123308342 container died fa9e6733a20dc271f2e45d20c468cb34eec94983cfbbb63a30de4a300e0ceca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 09 16:17:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c957e56a886bbca5f80a2b38afefbc98c22a1d4a5cecd4b79b7efa3a2c9e96f-merged.mount: Deactivated successfully.
Dec 09 16:17:30 compute-0 python3.9[198904]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:30 compute-0 podman[198909]: 2025-12-09 16:17:30.049015539 +0000 UTC m=+0.161054065 container remove fa9e6733a20dc271f2e45d20c468cb34eec94983cfbbb63a30de4a300e0ceca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elgamal, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:17:30 compute-0 sudo[198897]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:30 compute-0 systemd[1]: libpod-conmon-fa9e6733a20dc271f2e45d20c468cb34eec94983cfbbb63a30de4a300e0ceca8.scope: Deactivated successfully.
Dec 09 16:17:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:30 compute-0 podman[198974]: 2025-12-09 16:17:30.201269077 +0000 UTC m=+0.039217846 container create 2e7d37c5dce3664d88f631a5496cbfbf28c89550b9dbee514ed9e8cbec85238a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:17:30 compute-0 systemd[1]: Started libpod-conmon-2e7d37c5dce3664d88f631a5496cbfbf28c89550b9dbee514ed9e8cbec85238a.scope.
Dec 09 16:17:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65de46a309928429f02a98eb2d66e534ac2c01114e2ff616674532d00bc16d42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65de46a309928429f02a98eb2d66e534ac2c01114e2ff616674532d00bc16d42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65de46a309928429f02a98eb2d66e534ac2c01114e2ff616674532d00bc16d42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65de46a309928429f02a98eb2d66e534ac2c01114e2ff616674532d00bc16d42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
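
The kernel's "supports timestamps until 2038 (0x7fffffff)" notices mean the overlay bind mounts sit on an XFS filesystem formatted without the bigtime feature, so inode timestamps stop at the classic 32-bit epoch limit. A quick sketch of where that cutoff lands, plus a hedged check for bigtime (the mount point is illustrative, and the bigtime field only appears with newer xfsprogs):

    from datetime import datetime, timezone
    import subprocess

    # 0x7fffffff seconds after the Unix epoch is the classic y2038 cutoff.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00

    # xfs_info on a mounted path reports bigtime=0/1 on newer xfsprogs.
    out = subprocess.run(["xfs_info", "/var/lib/containers"],
                         capture_output=True, text=True).stdout
    print("bigtime=1" in out)
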
Dec 09 16:17:30 compute-0 podman[198974]: 2025-12-09 16:17:30.183491616 +0000 UTC m=+0.021440405 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:17:30 compute-0 podman[198974]: 2025-12-09 16:17:30.293587056 +0000 UTC m=+0.131535875 container init 2e7d37c5dce3664d88f631a5496cbfbf28c89550b9dbee514ed9e8cbec85238a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 09 16:17:30 compute-0 podman[198974]: 2025-12-09 16:17:30.306632954 +0000 UTC m=+0.144581743 container start 2e7d37c5dce3664d88f631a5496cbfbf28c89550b9dbee514ed9e8cbec85238a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:17:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:17:30 compute-0 podman[198974]: 2025-12-09 16:17:30.310833922 +0000 UTC m=+0.148782711 container attach 2e7d37c5dce3664d88f631a5496cbfbf28c89550b9dbee514ed9e8cbec85238a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:17:30 compute-0 sudo[199122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytfktbhbvqcsddaqyqerammyblbhyrbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297050.2067642-676-135674427352133/AnsiballZ_file.py'
Dec 09 16:17:30 compute-0 sudo[199122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:30 compute-0 gracious_noether[199019]: {
Dec 09 16:17:30 compute-0 gracious_noether[199019]:     "0": [
Dec 09 16:17:30 compute-0 gracious_noether[199019]:         {
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "devices": [
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "/dev/loop3"
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             ],
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_name": "ceph_lv0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_size": "21470642176",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "name": "ceph_lv0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "tags": {
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.cluster_name": "ceph",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.crush_device_class": "",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.encrypted": "0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.objectstore": "bluestore",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.osd_id": "0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.type": "block",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.vdo": "0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.with_tpm": "0"
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             },
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "type": "block",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "vg_name": "ceph_vg0"
Dec 09 16:17:30 compute-0 gracious_noether[199019]:         }
Dec 09 16:17:30 compute-0 gracious_noether[199019]:     ],
Dec 09 16:17:30 compute-0 gracious_noether[199019]:     "1": [
Dec 09 16:17:30 compute-0 gracious_noether[199019]:         {
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "devices": [
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "/dev/loop4"
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             ],
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_name": "ceph_lv1",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_size": "21470642176",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "name": "ceph_lv1",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "tags": {
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.cluster_name": "ceph",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.crush_device_class": "",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.encrypted": "0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.objectstore": "bluestore",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.osd_id": "1",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.type": "block",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.vdo": "0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.with_tpm": "0"
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             },
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "type": "block",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "vg_name": "ceph_vg1"
Dec 09 16:17:30 compute-0 gracious_noether[199019]:         }
Dec 09 16:17:30 compute-0 gracious_noether[199019]:     ],
Dec 09 16:17:30 compute-0 gracious_noether[199019]:     "2": [
Dec 09 16:17:30 compute-0 gracious_noether[199019]:         {
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "devices": [
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "/dev/loop5"
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             ],
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_name": "ceph_lv2",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_size": "21470642176",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "name": "ceph_lv2",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "tags": {
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.cluster_name": "ceph",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.crush_device_class": "",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.encrypted": "0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.objectstore": "bluestore",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.osd_id": "2",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.type": "block",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.vdo": "0",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:                 "ceph.with_tpm": "0"
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             },
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "type": "block",
Dec 09 16:17:30 compute-0 gracious_noether[199019]:             "vg_name": "ceph_vg2"
Dec 09 16:17:30 compute-0 gracious_noether[199019]:         }
Dec 09 16:17:30 compute-0 gracious_noether[199019]:     ]
Dec 09 16:17:30 compute-0 gracious_noether[199019]: }
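
The gracious_noether output is the JSON from the ceph-volume lvm list --format json call on the earlier ceph-admin sudo line: a map from OSD id to the logical volumes backing it. A minimal sketch of consuming it, assuming the JSON has been captured to a file (the filename is hypothetical); the lv_tags string is split into the same key/value pairs that ceph-volume presents in its "tags" field:

    import json

    # Output of: cephadm ... ceph-volume --fsid <fsid> -- lvm list --format json
    with open("lvm_list.json") as f:   # hypothetical capture of the JSON above
        osds = json.load(f)

    for osd_id, entries in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for e in entries:
            # lv_tags is a comma-separated k=v string; unpack it into a dict.
            tags = dict(t.split("=", 1) for t in e["lv_tags"].split(","))
            print(f"osd.{osd_id}: {e['lv_path']} "
                  f"on {','.join(e['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")

Against the listing above this prints osd.0 on /dev/loop3, osd.1 on /dev/loop4, and osd.2 on /dev/loop5, each backed by a single 21470642176-byte LV.
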
Dec 09 16:17:30 compute-0 systemd[1]: libpod-2e7d37c5dce3664d88f631a5496cbfbf28c89550b9dbee514ed9e8cbec85238a.scope: Deactivated successfully.
Dec 09 16:17:30 compute-0 podman[198974]: 2025-12-09 16:17:30.676990443 +0000 UTC m=+0.514939242 container died 2e7d37c5dce3664d88f631a5496cbfbf28c89550b9dbee514ed9e8cbec85238a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:17:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-65de46a309928429f02a98eb2d66e534ac2c01114e2ff616674532d00bc16d42-merged.mount: Deactivated successfully.
Dec 09 16:17:30 compute-0 podman[198974]: 2025-12-09 16:17:30.738849865 +0000 UTC m=+0.576798644 container remove 2e7d37c5dce3664d88f631a5496cbfbf28c89550b9dbee514ed9e8cbec85238a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 16:17:30 compute-0 systemd[1]: libpod-conmon-2e7d37c5dce3664d88f631a5496cbfbf28c89550b9dbee514ed9e8cbec85238a.scope: Deactivated successfully.
Dec 09 16:17:30 compute-0 sudo[198768]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:30 compute-0 python3.9[199125]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:30 compute-0 sudo[199122]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:30 compute-0 sudo[199139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:17:30 compute-0 sudo[199139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:17:30 compute-0 sudo[199139]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:30 compute-0 sudo[199167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:17:30 compute-0 sudo[199167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:17:31 compute-0 podman[199303]: 2025-12-09 16:17:31.243348551 +0000 UTC m=+0.045572984 container create aa92bb2132bf77a11a5c9f910d069085f13062b58967dc46cc36857affa69b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:17:31 compute-0 systemd[1]: Started libpod-conmon-aa92bb2132bf77a11a5c9f910d069085f13062b58967dc46cc36857affa69b0f.scope.
Dec 09 16:17:31 compute-0 podman[199303]: 2025-12-09 16:17:31.226091855 +0000 UTC m=+0.028316308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:17:31 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:17:31 compute-0 sudo[199369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvwaegvpyveoftxqzafswqctwjlguucm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297051.0060656-676-19530874077418/AnsiballZ_file.py'
Dec 09 16:17:31 compute-0 sudo[199369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:31 compute-0 podman[199303]: 2025-12-09 16:17:31.335785104 +0000 UTC m=+0.138009547 container init aa92bb2132bf77a11a5c9f910d069085f13062b58967dc46cc36857affa69b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 09 16:17:31 compute-0 podman[199303]: 2025-12-09 16:17:31.343041239 +0000 UTC m=+0.145265662 container start aa92bb2132bf77a11a5c9f910d069085f13062b58967dc46cc36857affa69b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 16:17:31 compute-0 podman[199303]: 2025-12-09 16:17:31.346159206 +0000 UTC m=+0.148383629 container attach aa92bb2132bf77a11a5c9f910d069085f13062b58967dc46cc36857affa69b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:17:31 compute-0 awesome_noether[199362]: 167 167
Dec 09 16:17:31 compute-0 systemd[1]: libpod-aa92bb2132bf77a11a5c9f910d069085f13062b58967dc46cc36857affa69b0f.scope: Deactivated successfully.
Dec 09 16:17:31 compute-0 podman[199303]: 2025-12-09 16:17:31.348593765 +0000 UTC m=+0.150818188 container died aa92bb2132bf77a11a5c9f910d069085f13062b58967dc46cc36857affa69b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_noether, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e837a0be0eb00be1790e3eca6b9b89380b687a4ec2f467d84bc683aafde4a25-merged.mount: Deactivated successfully.
Dec 09 16:17:31 compute-0 podman[199303]: 2025-12-09 16:17:31.380846973 +0000 UTC m=+0.183071396 container remove aa92bb2132bf77a11a5c9f910d069085f13062b58967dc46cc36857affa69b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_noether, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Dec 09 16:17:31 compute-0 systemd[1]: libpod-conmon-aa92bb2132bf77a11a5c9f910d069085f13062b58967dc46cc36857affa69b0f.scope: Deactivated successfully.
Dec 09 16:17:31 compute-0 ceph-mon[75222]: pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:31 compute-0 python3.9[199371]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:31 compute-0 sudo[199369]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:31 compute-0 podman[199394]: 2025-12-09 16:17:31.553111164 +0000 UTC m=+0.046348136 container create b1610703c39574c573d32eb84af95c722cba0ae07ffe52cf6857dd8a8feb5042 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mcclintock, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 16:17:31 compute-0 systemd[1]: Started libpod-conmon-b1610703c39574c573d32eb84af95c722cba0ae07ffe52cf6857dd8a8feb5042.scope.
Dec 09 16:17:31 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:17:31 compute-0 podman[199394]: 2025-12-09 16:17:31.535492498 +0000 UTC m=+0.028729470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709d31411ec5b3a357d898351c5841f84b032615d90e6e3adb3334aa34dd611f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709d31411ec5b3a357d898351c5841f84b032615d90e6e3adb3334aa34dd611f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709d31411ec5b3a357d898351c5841f84b032615d90e6e3adb3334aa34dd611f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709d31411ec5b3a357d898351c5841f84b032615d90e6e3adb3334aa34dd611f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:17:31 compute-0 podman[199394]: 2025-12-09 16:17:31.644410405 +0000 UTC m=+0.137647397 container init b1610703c39574c573d32eb84af95c722cba0ae07ffe52cf6857dd8a8feb5042 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mcclintock, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:17:31 compute-0 podman[199394]: 2025-12-09 16:17:31.653655395 +0000 UTC m=+0.146892347 container start b1610703c39574c573d32eb84af95c722cba0ae07ffe52cf6857dd8a8feb5042 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:17:31 compute-0 podman[199394]: 2025-12-09 16:17:31.657155584 +0000 UTC m=+0.150392546 container attach b1610703c39574c573d32eb84af95c722cba0ae07ffe52cf6857dd8a8feb5042 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mcclintock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:17:31 compute-0 sudo[199574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkinllhkpbnislafvfodaeilrkapucia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297051.7170603-676-107018147802558/AnsiballZ_file.py'
Dec 09 16:17:31 compute-0 sudo[199574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:32 compute-0 python3.9[199576]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:32 compute-0 sudo[199574]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:32 compute-0 lvm[199706]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:17:32 compute-0 lvm[199706]: VG ceph_vg0 finished
Dec 09 16:17:32 compute-0 lvm[199715]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:17:32 compute-0 lvm[199715]: VG ceph_vg1 finished
Dec 09 16:17:32 compute-0 lvm[199719]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:17:32 compute-0 lvm[199719]: VG ceph_vg2 finished
Dec 09 16:17:32 compute-0 lvm[199738]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:17:32 compute-0 lvm[199738]: VG ceph_vg1 finished
Dec 09 16:17:32 compute-0 lvm[199744]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:17:32 compute-0 lvm[199744]: VG ceph_vg2 finished
Dec 09 16:17:32 compute-0 lvm[199747]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:17:32 compute-0 lvm[199747]: VG ceph_vg1 finished
Dec 09 16:17:32 compute-0 bold_mcclintock[199433]: {}
Dec 09 16:17:32 compute-0 lvm[199748]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:17:32 compute-0 lvm[199748]: VG ceph_vg2 finished
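
The lvm[...] "PV ... online, VG ... is complete" / "VG ... finished" pairs are event-driven autoactivation: each udev-triggered pvscan checks whether every PV of the owning VG is now visible and, once complete, activates its LVs; the repeats come from separate pvscan processes racing on the same uevents. A loose sketch of confirming the same state from userspace with stock LVM reporting (typically needs root):

    import subprocess

    # A VG is "complete" when all pv_count PVs recorded in its metadata
    # are visible; vgs flags missing PVs separately.
    out = subprocess.run(
        ["vgs", "--noheadings", "-o", "vg_name,pv_count,lv_count"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        vg, pv_count, lv_count = line.split()
        print(f"{vg}: {pv_count} PV(s), {lv_count} LV(s)")
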
Dec 09 16:17:32 compute-0 systemd[1]: libpod-b1610703c39574c573d32eb84af95c722cba0ae07ffe52cf6857dd8a8feb5042.scope: Deactivated successfully.
Dec 09 16:17:32 compute-0 systemd[1]: libpod-b1610703c39574c573d32eb84af95c722cba0ae07ffe52cf6857dd8a8feb5042.scope: Consumed 1.401s CPU time.
Dec 09 16:17:32 compute-0 podman[199394]: 2025-12-09 16:17:32.535460107 +0000 UTC m=+1.028697049 container died b1610703c39574c573d32eb84af95c722cba0ae07ffe52cf6857dd8a8feb5042 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-709d31411ec5b3a357d898351c5841f84b032615d90e6e3adb3334aa34dd611f-merged.mount: Deactivated successfully.
Dec 09 16:17:32 compute-0 podman[199394]: 2025-12-09 16:17:32.590431485 +0000 UTC m=+1.083668427 container remove b1610703c39574c573d32eb84af95c722cba0ae07ffe52cf6857dd8a8feb5042 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:17:32 compute-0 systemd[1]: libpod-conmon-b1610703c39574c573d32eb84af95c722cba0ae07ffe52cf6857dd8a8feb5042.scope: Deactivated successfully.
Dec 09 16:17:32 compute-0 sudo[199167]: pam_unix(sudo:session): session closed for user root
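
bold_mcclintock is the companion ceph-volume raw list --format json call (the full command is on the ceph-admin sudo line above), and it prints {} because these OSDs were prepared in lvm mode, so only the lvm listing reports them. A hedged sketch of running both listings and comparing, reusing this log's fsid; it assumes a cephadm binary on PATH and that cephadm keeps its informational chatter on stderr so stdout parses as JSON:

    import json
    import subprocess

    FSID = "67f67f44-54fc-54ea-8df0-10931b6ecdaf"

    def ceph_volume(*args):
        # Mirrors the sudo commands in the journal, via the cephadm wrapper.
        cmd = ["cephadm", "ceph-volume", "--fsid", FSID, "--"] + list(args)
        res = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return json.loads(res.stdout)  # assumes JSON-only stdout

    lvm = ceph_volume("lvm", "list", "--format", "json")
    raw = ceph_volume("raw", "list", "--format", "json")
    print(sorted(lvm))   # ['0', '1', '2'] on this host
    print(raw)           # {} here: no raw-mode OSDs
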
Dec 09 16:17:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:17:32 compute-0 sudo[199810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmesyerurqbegnbglaucxcygkwyztkzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297052.3190856-676-186862276053893/AnsiballZ_file.py'
Dec 09 16:17:32 compute-0 sudo[199810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:17:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:17:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:17:32 compute-0 sudo[199813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:17:32 compute-0 sudo[199813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:17:32 compute-0 sudo[199813]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:32 compute-0 python3.9[199812]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:32 compute-0 sudo[199810]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:33 compute-0 sudo[199989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjijggkadhagetzgyadnycpzuhrwjggi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297053.1962047-775-160799220366109/AnsiballZ_stat.py'
Dec 09 16:17:33 compute-0 sudo[199989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:33 compute-0 ceph-mon[75222]: pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:17:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:17:33 compute-0 sshd-session[199917]: Invalid user odoo from 146.190.31.45 port 49670
Dec 09 16:17:33 compute-0 python3.9[199991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:33 compute-0 sudo[199989]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:33 compute-0 sshd-session[199917]: Connection closed by invalid user odoo 146.190.31.45 port 49670 [preauth]
Dec 09 16:17:34 compute-0 sudo[200112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpgrcpkwhtitgsxctrhbfectgestqsbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297053.1962047-775-160799220366109/AnsiballZ_copy.py'
Dec 09 16:17:34 compute-0 sudo[200112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:34 compute-0 python3.9[200114]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297053.1962047-775-160799220366109/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:34 compute-0 sudo[200112]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:34 compute-0 sudo[200264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htubhkpneximmmwjsyodbocszeusladf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297054.5484803-775-168247601079932/AnsiballZ_stat.py'
Dec 09 16:17:34 compute-0 sudo[200264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:35 compute-0 python3.9[200266]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:35 compute-0 sudo[200264]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:17:35 compute-0 sudo[200387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkcuwbyiihmgmykwmtslqbagiaotreim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297054.5484803-775-168247601079932/AnsiballZ_copy.py'
Dec 09 16:17:35 compute-0 sudo[200387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:35 compute-0 ceph-mon[75222]: pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:35 compute-0 python3.9[200389]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297054.5484803-775-168247601079932/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:35 compute-0 sudo[200387]: pam_unix(sudo:session): session closed for user root
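
The zuul-side Ansible tasks in this stretch are standard systemd drop-in plumbing: create /etc/systemd/system/<unit>.socket.d with mode 0755, then install an override.conf rendered from libvirt-socket.unit.j2 (the template body is masked as NOT_LOGGING_PARAMETER, so it is not reproduced here). A minimal sketch of the same two steps, assuming root and a daemon-reload afterwards; the file content below is a stand-in placeholder, not the real template:

    import os
    from pathlib import Path

    unit = "virtlogd.socket"                        # one of the units above
    dropin = Path("/etc/systemd/system") / f"{unit}.d"

    # ansible.builtin.file: state=directory mode=0755 owner=root group=root
    dropin.mkdir(parents=True, exist_ok=True)
    os.chmod(dropin, 0o755)

    # ansible.builtin.copy: dest=.../override.conf mode=0644
    override = dropin / "override.conf"
    override.write_text(
        "# placeholder: real content is rendered from libvirt-socket.unit.j2\n"
    )
    os.chmod(override, 0o644)
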
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:36 compute-0 sudo[200539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nstwyacxuovpttrylhtlxacqmqvrseyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297055.9535463-775-75685068892239/AnsiballZ_stat.py'
Dec 09 16:17:36 compute-0 sudo[200539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:36 compute-0 python3.9[200541]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:36 compute-0 sudo[200539]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:17:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:17:36 compute-0 sudo[200662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmcgiyapbjpwullzglqqvsuorgfscoho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297055.9535463-775-75685068892239/AnsiballZ_copy.py'
Dec 09 16:17:36 compute-0 sudo[200662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:37 compute-0 python3.9[200664]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297055.9535463-775-75685068892239/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:37 compute-0 sudo[200662]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:37 compute-0 sudo[200814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffaqtyrxwllcwnofhhpjuvrwztetpiog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297057.1975765-775-17184103715261/AnsiballZ_stat.py'
Dec 09 16:17:37 compute-0 sudo[200814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:37 compute-0 ceph-mon[75222]: pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:37 compute-0 python3.9[200816]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:37 compute-0 sudo[200814]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:38 compute-0 sudo[200937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siyscdwgsewtjfgwguqxtvudddeewtrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297057.1975765-775-17184103715261/AnsiballZ_copy.py'
Dec 09 16:17:38 compute-0 sudo[200937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:38 compute-0 python3.9[200939]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297057.1975765-775-17184103715261/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:38 compute-0 sudo[200937]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:38 compute-0 sudo[201089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njiwzddgmulfchnkzbukzwtjcieixoyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297058.6089787-775-23955228282920/AnsiballZ_stat.py'
Dec 09 16:17:38 compute-0 sudo[201089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:39 compute-0 python3.9[201091]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:39 compute-0 sudo[201089]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:39 compute-0 sudo[201212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auprvlyumminhmomzpikdnluhggnvlin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297058.6089787-775-23955228282920/AnsiballZ_copy.py'
Dec 09 16:17:39 compute-0 sudo[201212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:39 compute-0 ceph-mon[75222]: pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:39 compute-0 python3.9[201214]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297058.6089787-775-23955228282920/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:39 compute-0 sudo[201212]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:39 compute-0 auditd[700]: Audit daemon rotating log files
Dec 09 16:17:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:17:40 compute-0 sudo[201364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlbhnqqvspyopajjtggpoqxpkwrxdkkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297059.9954247-775-265848693587869/AnsiballZ_stat.py'
Dec 09 16:17:40 compute-0 sudo[201364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:40 compute-0 python3.9[201366]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:40 compute-0 sudo[201364]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:40 compute-0 sudo[201487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcjcfecxrdrvpjnwlxttbiiehuebzuxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297059.9954247-775-265848693587869/AnsiballZ_copy.py'
Dec 09 16:17:40 compute-0 sudo[201487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:41 compute-0 python3.9[201489]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297059.9954247-775-265848693587869/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:41 compute-0 sudo[201487]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:41 compute-0 sudo[201639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjbbciutlaiyvhvucfjawcyelfwqpgds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297061.25304-775-160692400850240/AnsiballZ_stat.py'
Dec 09 16:17:41 compute-0 sudo[201639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:41 compute-0 ceph-mon[75222]: pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:41 compute-0 python3.9[201641]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:41 compute-0 sudo[201639]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:42 compute-0 sudo[201762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-optufzzfdhszegtndoojcklcmvtlnjsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297061.25304-775-160692400850240/AnsiballZ_copy.py'
Dec 09 16:17:42 compute-0 sudo[201762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:42 compute-0 python3.9[201764]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297061.25304-775-160692400850240/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:42 compute-0 sudo[201762]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:42 compute-0 sudo[201914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iefdaemhpdkzstickuqvphdamggwcphv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297062.4720132-775-246408238193005/AnsiballZ_stat.py'
Dec 09 16:17:42 compute-0 sudo[201914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:42 compute-0 python3.9[201916]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:42 compute-0 sudo[201914]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:43 compute-0 sudo[202037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmlxitbnwrdswlnrxrqawabgzgujuqzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297062.4720132-775-246408238193005/AnsiballZ_copy.py'
Dec 09 16:17:43 compute-0 sudo[202037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:43 compute-0 python3.9[202039]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297062.4720132-775-246408238193005/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:43 compute-0 sudo[202037]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:43 compute-0 ceph-mon[75222]: pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:44 compute-0 sudo[202189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcyxgfwhrnqrcsdxnwtdhjgapidfolmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297063.767494-775-147351363351926/AnsiballZ_stat.py'
Dec 09 16:17:44 compute-0 sudo[202189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:44 compute-0 python3.9[202191]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:44 compute-0 sudo[202189]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:44 compute-0 sudo[202312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjahcaoyakuttjtishcfbbtltoqisiwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297063.767494-775-147351363351926/AnsiballZ_copy.py'
Dec 09 16:17:44 compute-0 sudo[202312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:44 compute-0 ceph-mon[75222]: pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:45 compute-0 python3.9[202314]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297063.767494-775-147351363351926/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:45 compute-0 sudo[202312]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:17:45 compute-0 sudo[202464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-refrhnpvoprjnjbayqddxztzeilezkia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297065.1948824-775-159813318342037/AnsiballZ_stat.py'
Dec 09 16:17:45 compute-0 sudo[202464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:45 compute-0 python3.9[202466]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:45 compute-0 sudo[202464]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:46 compute-0 sudo[202600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akoywacdghqolizfmpzbopvfyaayikna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297065.1948824-775-159813318342037/AnsiballZ_copy.py'
Dec 09 16:17:46 compute-0 sudo[202600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:46 compute-0 podman[202561]: 2025-12-09 16:17:46.162844435 +0000 UTC m=+0.119390603 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:17:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:46 compute-0 python3.9[202607]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297065.1948824-775-159813318342037/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:46 compute-0 sudo[202600]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:46 compute-0 sudo[202763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfytwrqtzvxjdopykbkxjbtmvjnhjabx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297066.4807882-775-266119099792558/AnsiballZ_stat.py'
Dec 09 16:17:46 compute-0 sudo[202763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:47 compute-0 python3.9[202765]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:47 compute-0 sudo[202763]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:47 compute-0 ceph-mon[75222]: pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:47 compute-0 sudo[202886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybhomzknosipipxjrxfwwrgfsygdblff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297066.4807882-775-266119099792558/AnsiballZ_copy.py'
Dec 09 16:17:47 compute-0 sudo[202886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:47 compute-0 python3.9[202888]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297066.4807882-775-266119099792558/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:47 compute-0 sudo[202886]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:48 compute-0 sudo[203048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqwtxasyihenlnagwcfqubtusswylcvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297067.9932764-775-224774748826748/AnsiballZ_stat.py'
Dec 09 16:17:48 compute-0 sudo[203048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:48 compute-0 podman[203012]: 2025-12-09 16:17:48.369713979 +0000 UTC m=+0.080762525 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:17:48 compute-0 python3.9[203055]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:48 compute-0 sudo[203048]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:48 compute-0 sudo[203178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lklgipdbdunridpsloosiepywazncigj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297067.9932764-775-224774748826748/AnsiballZ_copy.py'
Dec 09 16:17:48 compute-0 sudo[203178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:49 compute-0 python3.9[203180]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297067.9932764-775-224774748826748/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:49 compute-0 sudo[203178]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:49 compute-0 ceph-mon[75222]: pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:49 compute-0 sudo[203330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aopopccuvxzargbcfbryporhfojsycty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297069.3010433-775-164570464285352/AnsiballZ_stat.py'
Dec 09 16:17:49 compute-0 sudo[203330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:49 compute-0 python3.9[203332]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:49 compute-0 sudo[203330]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:50 compute-0 sudo[203453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piljseqgbgavgxvlgpmhrjapizehuwce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297069.3010433-775-164570464285352/AnsiballZ_copy.py'
Dec 09 16:17:50 compute-0 sudo[203453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:17:50 compute-0 python3.9[203455]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297069.3010433-775-164570464285352/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:50 compute-0 sudo[203453]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:50 compute-0 sudo[203605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbvhdofjpzorgjmboxazanhcedvhamio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297070.5353818-775-175706488325980/AnsiballZ_stat.py'
Dec 09 16:17:50 compute-0 sudo[203605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:51 compute-0 python3.9[203607]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:17:51 compute-0 sudo[203605]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:51 compute-0 ceph-mon[75222]: pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:51 compute-0 sudo[203728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgnlceecztsbgkulsefllffdlnfvskjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297070.5353818-775-175706488325980/AnsiballZ_copy.py'
Dec 09 16:17:51 compute-0 sudo[203728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:51 compute-0 python3.9[203730]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297070.5353818-775-175706488325980/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:51 compute-0 sudo[203728]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:52 compute-0 python3.9[203880]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:17:52 compute-0 sudo[204033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgtcqgkvcjkmujeuzzqwnosupicwjyuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297072.4769483-981-176802322369001/AnsiballZ_seboolean.py'
Dec 09 16:17:52 compute-0 sudo[204033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:53 compute-0 python3.9[204035]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 09 16:17:53 compute-0 ceph-mon[75222]: pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:54 compute-0 sudo[204033]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:54 compute-0 sudo[204189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdrizbubvjiciocjgafcexhutuqmnfbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297074.5452108-989-55812854350842/AnsiballZ_copy.py'
Dec 09 16:17:54 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 09 16:17:54 compute-0 sudo[204189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:55 compute-0 python3.9[204191]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:55 compute-0 sudo[204189]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:55 compute-0 ceph-mon[75222]: pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:17:55 compute-0 sudo[204341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seggnyvgetcodicgiczxyetufwnwombo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297075.2111728-989-262168392734357/AnsiballZ_copy.py'
Dec 09 16:17:55 compute-0 sudo[204341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:55 compute-0 python3.9[204343]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:55 compute-0 sudo[204341]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:56 compute-0 sudo[204493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prvcdgscmlvbxksmdhymnkzxkqzwqbre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297075.9924088-989-173140335659305/AnsiballZ_copy.py'
Dec 09 16:17:56 compute-0 sudo[204493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:17:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:17:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:17:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:17:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:17:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:17:56 compute-0 python3.9[204495]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:56 compute-0 sudo[204493]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:57 compute-0 sudo[204645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dicoapfvrpfhxreeyvgzsothcgwnsquu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297076.7224572-989-210737701347888/AnsiballZ_copy.py'
Dec 09 16:17:57 compute-0 sudo[204645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:57 compute-0 python3.9[204647]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:57 compute-0 sudo[204645]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:57 compute-0 ceph-mon[75222]: pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:57 compute-0 sudo[204797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyltwqfhjagsundtkkdwnzpagtjyhrjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297077.448837-989-9754559438912/AnsiballZ_copy.py'
Dec 09 16:17:57 compute-0 sudo[204797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:57 compute-0 python3.9[204799]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:57 compute-0 sudo[204797]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:58 compute-0 sudo[204949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcaqrkuublrzzfedmzwqjczvqkpsbzjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297078.1596-1025-7999376544867/AnsiballZ_copy.py'
Dec 09 16:17:58 compute-0 sudo[204949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:58 compute-0 python3.9[204951]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:58 compute-0 sudo[204949]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:59 compute-0 sudo[205101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tslhnisyejwirhgscfxcwnyiqjteiegv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297078.8266618-1025-138744131936280/AnsiballZ_copy.py'
Dec 09 16:17:59 compute-0 sudo[205101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:59 compute-0 python3.9[205103]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:59 compute-0 sudo[205101]: pam_unix(sudo:session): session closed for user root
Dec 09 16:17:59 compute-0 ceph-mon[75222]: pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:17:59 compute-0 sudo[205253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvynxmmjtgvnpvisuwzlyyovktxaxtlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297079.4702694-1025-179127371694605/AnsiballZ_copy.py'
Dec 09 16:17:59 compute-0 sudo[205253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:17:59 compute-0 python3.9[205255]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:17:59 compute-0 sudo[205253]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:18:00 compute-0 sudo[205405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxrvllzyqhccontwnuhsccpmkefmunwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297080.1671882-1025-44633852200691/AnsiballZ_copy.py'
Dec 09 16:18:00 compute-0 sudo[205405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:00 compute-0 python3.9[205407]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:00 compute-0 sudo[205405]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:01 compute-0 sudo[205557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqqikrdsqgdmgpmnzrmaynjcautqrigv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297080.8222039-1025-134124494725348/AnsiballZ_copy.py'
Dec 09 16:18:01 compute-0 sudo[205557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:01 compute-0 ceph-mon[75222]: pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:01 compute-0 python3.9[205559]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:01 compute-0 sudo[205557]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:01 compute-0 sudo[205709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njixbpwhgarqbqofsxlsyqmszgsicezx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297081.618693-1061-607288271913/AnsiballZ_systemd.py'
Dec 09 16:18:01 compute-0 sudo[205709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:02 compute-0 python3.9[205711]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:18:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:02 compute-0 systemd[1]: Reloading.
Dec 09 16:18:02 compute-0 systemd-rc-local-generator[205739]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:18:02 compute-0 systemd-sysv-generator[205742]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:18:02 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Dec 09 16:18:02 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Dec 09 16:18:02 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 09 16:18:02 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 09 16:18:02 compute-0 systemd[1]: Starting libvirt logging daemon...
Dec 09 16:18:02 compute-0 systemd[1]: Started libvirt logging daemon.
Dec 09 16:18:02 compute-0 sudo[205709]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:03 compute-0 sudo[205902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rutfnclbqlobapjiagmkhinruiufhybr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297082.8425071-1061-89906055965994/AnsiballZ_systemd.py'
Dec 09 16:18:03 compute-0 sudo[205902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:03 compute-0 ceph-mon[75222]: pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:03 compute-0 python3.9[205904]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:18:03 compute-0 systemd[1]: Reloading.
Dec 09 16:18:03 compute-0 systemd-rc-local-generator[205933]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:18:03 compute-0 systemd-sysv-generator[205937]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:18:03 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 09 16:18:03 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 09 16:18:03 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 09 16:18:03 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 09 16:18:03 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 09 16:18:03 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 09 16:18:03 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 09 16:18:03 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 09 16:18:03 compute-0 sudo[205902]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:04 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 09 16:18:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:04 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 09 16:18:04 compute-0 sudo[206122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmnollsafltpaaxaugtkdaafoezedmdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297084.1078515-1061-116186970081527/AnsiballZ_systemd.py'
Dec 09 16:18:04 compute-0 sudo[206122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:04 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 09 16:18:04 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 09 16:18:04 compute-0 python3.9[206128]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:18:04 compute-0 systemd[1]: Reloading.
Dec 09 16:18:04 compute-0 systemd-sysv-generator[206156]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:18:04 compute-0 systemd-rc-local-generator[206153]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:18:05 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 09 16:18:05 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 09 16:18:05 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 09 16:18:05 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 09 16:18:05 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 09 16:18:05 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 09 16:18:05 compute-0 sudo[206122]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:18:05 compute-0 ceph-mon[75222]: pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:05 compute-0 setroubleshoot[205994]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 3efc3e46-5039-43a0-987b-f6eb52a2a047
Dec 09 16:18:05 compute-0 setroubleshoot[205994]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 09 16:18:05 compute-0 sudo[206341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heelpigjhbvhaxibjohjzcottzrtwqrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297085.4743416-1061-18430664972739/AnsiballZ_systemd.py'
Dec 09 16:18:05 compute-0 sudo[206341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:06 compute-0 python3.9[206343]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:18:06 compute-0 systemd[1]: Reloading.
Dec 09 16:18:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:06 compute-0 systemd-sysv-generator[206374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Dec 09 16:18:06 compute-0 systemd-rc-local-generator[206371]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:18:06 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Dec 09 16:18:06 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 09 16:18:06 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 09 16:18:06 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 09 16:18:06 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 09 16:18:06 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 09 16:18:06 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 09 16:18:06 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 09 16:18:06 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 09 16:18:06 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 09 16:18:06 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 09 16:18:06 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 09 16:18:06 compute-0 sudo[206341]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:07 compute-0 sudo[206556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqlrppcifravemrxclzsgswlzedrytpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297086.7149215-1061-262087109472159/AnsiballZ_systemd.py'
Dec 09 16:18:07 compute-0 sudo[206556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:07 compute-0 python3.9[206558]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:18:07 compute-0 systemd[1]: Reloading.
Dec 09 16:18:07 compute-0 ceph-mon[75222]: pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:07 compute-0 systemd-rc-local-generator[206585]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:18:07 compute-0 systemd-sysv-generator[206588]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Dec 09 16:18:07 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Dec 09 16:18:07 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Dec 09 16:18:07 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 09 16:18:07 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 09 16:18:07 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 09 16:18:07 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 09 16:18:07 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 09 16:18:07 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 09 16:18:07 compute-0 sudo[206556]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:08 compute-0 sudo[206768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggkudltegrshlhftaoaooptbjjleajjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297088.055852-1098-99730636987502/AnsiballZ_file.py'
Dec 09 16:18:08 compute-0 sudo[206768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:08 compute-0 python3.9[206770]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:08 compute-0 sudo[206768]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:09 compute-0 sudo[206920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbpbtrribgsfdzjpsjuhllocmitkhair ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297088.7774632-1106-99070427622705/AnsiballZ_find.py'
Dec 09 16:18:09 compute-0 sudo[206920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:09 compute-0 python3.9[206922]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 16:18:09 compute-0 sudo[206920]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:09 compute-0 ceph-mon[75222]: pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:09 compute-0 sudo[207072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnbyxxdmhcvhkjaaqktbxkbaalovyhxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297089.5619411-1114-137479220253875/AnsiballZ_command.py'
Dec 09 16:18:09 compute-0 sudo[207072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:10 compute-0 python3.9[207074]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:18:10 compute-0 sudo[207072]: pam_unix(sudo:session): session closed for user root
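The command above echoes the cluster name and then extracts the fsid from the deployed ceph.conf; xargs trims the whitespace left around the value by the '=' split. A minimal ceph.conf fragment it would parse might look like this (the fsid matches the one used by the virsh and cephadm tasks later in this log; the mon_host value is illustrative only):

    [global]
    fsid = 67f67f44-54fc-54ea-8df0-10931b6ecdaf
    mon_host = 192.168.122.100

On that input, awk matches the fsid line, prints everything after the '=', and xargs reduces it to the bare UUID.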
Dec 09 16:18:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:18:10 compute-0 python3.9[207228]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 16:18:11 compute-0 ceph-mon[75222]: pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:11 compute-0 python3.9[207378]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:12 compute-0 python3.9[207499]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765297091.3778214-1133-98463366713772/.source.xml follow=False _original_basename=secret.xml.j2 checksum=010e4f70fb54304fab7fe96aa1b1f11e7ca56e16 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:12 compute-0 sudo[207649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exlqzawjdjivujoysszhflpkkqcwxtjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297092.682298-1148-166287287581596/AnsiballZ_command.py'
Dec 09 16:18:12 compute-0 sudo[207649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:13 compute-0 python3.9[207651]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 67f67f44-54fc-54ea-8df0-10931b6ecdaf
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:18:13 compute-0 polkitd[43504]: Registered Authentication Agent for unix-process:207653:520082 (system bus name :1.2549 [pkttyagent --process 207653 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 09 16:18:13 compute-0 polkitd[43504]: Unregistered Authentication Agent for unix-process:207653:520082 (system bus name :1.2549, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 09 16:18:13 compute-0 polkitd[43504]: Registered Authentication Agent for unix-process:207652:520081 (system bus name :1.2550 [pkttyagent --process 207652 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 09 16:18:13 compute-0 polkitd[43504]: Unregistered Authentication Agent for unix-process:207652:520081 (system bus name :1.2550, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 09 16:18:13 compute-0 sudo[207649]: pam_unix(sudo:session): session closed for user root
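The contents of /tmp/secret.xml are not logged (content=NOT_LOGGING_PARAMETER in the copy task above), but the sequence is the standard way to register a Ceph auth key with libvirt: undefine any stale secret, define it from XML, then set its value. A sketch under those assumptions follows; the UUID and the FSID/KEY variable names come from this log, while the XML shape and the usage name are assumed:

    FSID=67f67f44-54fc-54ea-8df0-10931b6ecdaf
    cat > /tmp/secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <uuid>${FSID}</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>  <!-- assumed usage name -->
      </usage>
    </secret>
    EOF
    virsh secret-undefine "$FSID"
    virsh secret-define --file /tmp/secret.xml
    # The 16:18:15 task below exports FSID and KEY, consistent with:
    virsh secret-set-value "$FSID" --base64 "$KEY"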
Dec 09 16:18:13 compute-0 ceph-mon[75222]: pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:14 compute-0 python3.9[207813]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:14 compute-0 sudo[207963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyyzrvhfambebcglnyxdyvyjlzsvvkct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297094.3336623-1164-193105407705745/AnsiballZ_command.py'
Dec 09 16:18:14 compute-0 sudo[207963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:14 compute-0 sudo[207963]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:18:15 compute-0 ceph-mon[75222]: pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:15 compute-0 sudo[208116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xihydtufzsjeneceedenjzwdlcpgwept ; FSID=67f67f44-54fc-54ea-8df0-10931b6ecdaf KEY=AQANSDhpAAAAABAAVF61qpb1VLD1uojiB4Gqaw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297095.1218808-1172-218065377689230/AnsiballZ_command.py'
Dec 09 16:18:15 compute-0 sudo[208116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:15 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 09 16:18:15 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 09 16:18:15 compute-0 polkitd[43504]: Registered Authentication Agent for unix-process:208119:520329 (system bus name :1.2553 [pkttyagent --process 208119 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 09 16:18:15 compute-0 polkitd[43504]: Unregistered Authentication Agent for unix-process:208119:520329 (system bus name :1.2553, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 09 16:18:15 compute-0 sudo[208116]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:16 compute-0 sudo[208288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urqladcogghvstmtndgccqppsvbmgtgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297095.958769-1180-221269622261884/AnsiballZ_copy.py'
Dec 09 16:18:16 compute-0 sudo[208288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:16 compute-0 sshd-session[208201]: Invalid user odoo from 146.190.31.45 port 57210
Dec 09 16:18:16 compute-0 podman[208250]: 2025-12-09 16:18:16.465675754 +0000 UTC m=+0.127207652 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 09 16:18:16 compute-0 sshd-session[208201]: Connection closed by invalid user odoo 146.190.31.45 port 57210 [preauth]
Dec 09 16:18:16 compute-0 python3.9[208296]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:16 compute-0 sudo[208288]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:17 compute-0 sudo[208455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzbwoaenljpvpyjzosqsunciqcyhgkvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297096.8285925-1188-86843289456967/AnsiballZ_stat.py'
Dec 09 16:18:17 compute-0 sudo[208455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:17 compute-0 python3.9[208457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:17 compute-0 ceph-mon[75222]: pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:17 compute-0 sudo[208455]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:18:17.834 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:18:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:18:17.835 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:18:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:18:17.835 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:18:17 compute-0 sudo[208578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iswhvcvnlqjktvhjjwqdkxkuzchiuzbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297096.8285925-1188-86843289456967/AnsiballZ_copy.py'
Dec 09 16:18:17 compute-0 sudo[208578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:18 compute-0 python3.9[208580]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765297096.8285925-1188-86843289456967/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:18 compute-0 sudo[208578]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:18 compute-0 podman[208680]: 2025-12-09 16:18:18.654511451 +0000 UTC m=+0.085258002 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:18:18 compute-0 sudo[208749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqwlbmjgmckdxtokxfzybyfsrpmrebab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297098.4125671-1204-249861296776344/AnsiballZ_file.py'
Dec 09 16:18:18 compute-0 sudo[208749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:18 compute-0 python3.9[208751]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:18 compute-0 sudo[208749]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:19 compute-0 ceph-mon[75222]: pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:19 compute-0 sudo[208901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjlevabevoyrselqdjzrbcezuhpqedps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297099.1212056-1212-42747990115005/AnsiballZ_stat.py'
Dec 09 16:18:19 compute-0 sudo[208901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:19 compute-0 python3.9[208903]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:19 compute-0 sudo[208901]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:19 compute-0 sudo[208979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kounqrghcwdlkyacdhowqdqnoknujqfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297099.1212056-1212-42747990115005/AnsiballZ_file.py'
Dec 09 16:18:19 compute-0 sudo[208979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:20 compute-0 python3.9[208981]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:20 compute-0 sudo[208979]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:18:20 compute-0 sudo[209131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwppbdeoffjhhxbwyxhkafslhufwqzse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297100.3507962-1224-83231087823999/AnsiballZ_stat.py'
Dec 09 16:18:20 compute-0 sudo[209131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:20 compute-0 python3.9[209133]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:20 compute-0 sudo[209131]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:21 compute-0 sudo[209209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaabttzmucplkibevxzqxlhqmzcvzpji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297100.3507962-1224-83231087823999/AnsiballZ_file.py'
Dec 09 16:18:21 compute-0 sudo[209209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:21 compute-0 python3.9[209211]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.iqjr2gka recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:21 compute-0 sudo[209209]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:21 compute-0 ceph-mon[75222]: pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:21 compute-0 sudo[209361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grgabtnctnzhisfotyjlsaxgalojoosf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297101.613553-1236-61868692677764/AnsiballZ_stat.py'
Dec 09 16:18:21 compute-0 sudo[209361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:22 compute-0 python3.9[209363]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:22 compute-0 sudo[209361]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:22 compute-0 sudo[209439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzsjsqywfsnjjbhbrfwukfviqwiklouw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297101.613553-1236-61868692677764/AnsiballZ_file.py'
Dec 09 16:18:22 compute-0 sudo[209439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:22 compute-0 python3.9[209441]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:22 compute-0 sudo[209439]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:23 compute-0 sudo[209591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldfkrgaurdswfrvmebdlqahlejolrtim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297102.8872926-1249-62710603177770/AnsiballZ_command.py'
Dec 09 16:18:23 compute-0 sudo[209591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:23 compute-0 python3.9[209593]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:18:23 compute-0 sudo[209591]: pam_unix(sudo:session): session closed for user root
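nft -j list ruleset emits the current ruleset in nftables' JSON form, which is what the firewall role's later tasks consume. The output shape is roughly as follows (trimmed; table, chain, and handle values are illustrative):

    {"nftables": [
      {"metainfo": {"version": "1.0.4", "json_schema_version": 1}},
      {"table": {"family": "inet", "name": "filter", "handle": 1}},
      {"chain": {"family": "inet", "table": "filter", "name": "input",
                 "handle": 1, "type": "filter", "hook": "input", "prio": 0, "policy": "accept"}}
    ]}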
Dec 09 16:18:23 compute-0 ceph-mon[75222]: pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:24 compute-0 sudo[209744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plozbrfnxvzxxhsztvlqyhgxouyjxsfe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765297103.614143-1257-171990841462175/AnsiballZ_edpm_nftables_from_files.py'
Dec 09 16:18:24 compute-0 sudo[209744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:24 compute-0 python3[209746]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 09 16:18:24 compute-0 sudo[209744]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:24 compute-0 sudo[209896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjlcjuatwnbyfsdpgjfhsfvmghjildwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297104.474022-1265-14123465259707/AnsiballZ_stat.py'
Dec 09 16:18:24 compute-0 sudo[209896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:25 compute-0 python3.9[209898]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:25 compute-0 sudo[209896]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:25 compute-0 sudo[209974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqnfuckgkokkiqkpvjodfvugoqxdzanp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297104.474022-1265-14123465259707/AnsiballZ_file.py'
Dec 09 16:18:25 compute-0 sudo[209974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:18:25 compute-0 python3.9[209976]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:25 compute-0 sudo[209974]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:25 compute-0 ceph-mon[75222]: pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:18:25
Dec 09 16:18:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:18:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:18:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.mgr', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.meta']
Dec 09 16:18:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:18:26 compute-0 sudo[210126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyzuebvnkqdsqvwpgevktbybfmdoghya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297105.6924486-1277-237798951276130/AnsiballZ_stat.py'
Dec 09 16:18:26 compute-0 sudo[210126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:26 compute-0 python3.9[210128]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:26 compute-0 sudo[210126]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:26 compute-0 sudo[210204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aimxygpltvmufodafwtmbrqyynqgpvjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297105.6924486-1277-237798951276130/AnsiballZ_file.py'
Dec 09 16:18:26 compute-0 sudo[210204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:18:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:18:26 compute-0 python3.9[210206]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:26 compute-0 sudo[210204]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:27 compute-0 sudo[210356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfyycqmenlyqmbrkqnipafgepuyjpvzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297106.9217095-1289-48622736632926/AnsiballZ_stat.py'
Dec 09 16:18:27 compute-0 sudo[210356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:27 compute-0 python3.9[210358]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:27 compute-0 sudo[210356]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:27 compute-0 ceph-mon[75222]: pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:27 compute-0 sudo[210434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgwczdgcihueyrcpcyffmdgowdskvdsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297106.9217095-1289-48622736632926/AnsiballZ_file.py'
Dec 09 16:18:27 compute-0 sudo[210434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:28 compute-0 python3.9[210436]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:28 compute-0 sudo[210434]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:28 compute-0 sudo[210586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqvfrsnfzkxrpndmqmgkbsamdxtsltzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297108.2397857-1301-13792891821808/AnsiballZ_stat.py'
Dec 09 16:18:28 compute-0 sudo[210586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:28 compute-0 python3.9[210588]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:28 compute-0 sudo[210586]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:28 compute-0 sudo[210664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymkphyqfinadejhiomtzsumuqgesgisd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297108.2397857-1301-13792891821808/AnsiballZ_file.py'
Dec 09 16:18:28 compute-0 sudo[210664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:29 compute-0 python3.9[210666]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:29 compute-0 sudo[210664]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:29 compute-0 ceph-mon[75222]: pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:29 compute-0 sudo[210816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wljfldtavpjltcmzayvzzbrngtgpwmfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297109.404604-1313-151059765676256/AnsiballZ_stat.py'
Dec 09 16:18:29 compute-0 sudo[210816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:30 compute-0 python3.9[210818]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:30 compute-0 sudo[210816]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:18:30 compute-0 sudo[210942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edcrbcqetqvtxctkbulzmomjovgiaear ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297109.404604-1313-151059765676256/AnsiballZ_copy.py'
Dec 09 16:18:30 compute-0 sudo[210942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:30 compute-0 python3.9[210944]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765297109.404604-1313-151059765676256/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:30 compute-0 sudo[210942]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:31 compute-0 sudo[211094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bijaoultakgcizwrhkygoenhrlbqamst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297110.9934876-1328-79348893465761/AnsiballZ_file.py'
Dec 09 16:18:31 compute-0 sudo[211094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:31 compute-0 python3.9[211096]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:31 compute-0 sudo[211094]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:31 compute-0 ceph-mon[75222]: pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:31 compute-0 sudo[211246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmmkosqkzmnerfpyrpdqsivexyipmtyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297111.6712217-1336-248081242203657/AnsiballZ_command.py'
Dec 09 16:18:31 compute-0 sudo[211246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:32 compute-0 python3.9[211248]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:18:32 compute-0 sudo[211246]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:32 compute-0 sudo[211374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:18:32 compute-0 sudo[211374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:18:32 compute-0 sudo[211374]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:32 compute-0 sudo[211426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pflrbkmzvjziuhtorpntvmfvqyahzfyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297112.3590612-1344-62580734244616/AnsiballZ_blockinfile.py'
Dec 09 16:18:32 compute-0 sudo[211426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:32 compute-0 sudo[211427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:18:32 compute-0 sudo[211427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:18:33 compute-0 python3.9[211440]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:33 compute-0 sudo[211426]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:33 compute-0 sudo[211427]: pam_unix(sudo:session): session closed for user root
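Given the marker, marker_begin/marker_end, and block parameters of the blockinfile invocation above, the managed block written to /etc/sysconfig/nftables.conf reconstructs to the following (the task validates the result with nft -c -f %s before committing it):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK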
Dec 09 16:18:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:18:33 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:18:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:18:33 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:18:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:18:33 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:18:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:18:33 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:18:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:18:33 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:18:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:18:33 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:18:33 compute-0 sudo[211584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:18:33 compute-0 sudo[211584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:18:33 compute-0 sudo[211584]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:33 compute-0 ceph-mon[75222]: pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:18:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:18:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:18:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:18:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:18:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:18:33 compute-0 sudo[211632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:18:33 compute-0 sudo[211632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:18:33 compute-0 sudo[211684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urghyynsivdrghvrqoxmcfxoeamfnczw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297113.3866751-1353-210894325612128/AnsiballZ_command.py'
Dec 09 16:18:33 compute-0 sudo[211684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:33 compute-0 python3.9[211686]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:18:33 compute-0 podman[211699]: 2025-12-09 16:18:33.971712243 +0000 UTC m=+0.045424430 container create 1f856363ed2a3911373a18ddfb127d2e09a112fae34a22f5d16928e5f6c368eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:18:34 compute-0 sudo[211684]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:34 compute-0 systemd[1]: Started libpod-conmon-1f856363ed2a3911373a18ddfb127d2e09a112fae34a22f5d16928e5f6c368eb.scope.
Dec 09 16:18:34 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:18:34 compute-0 podman[211699]: 2025-12-09 16:18:33.949155688 +0000 UTC m=+0.022867905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:18:34 compute-0 podman[211699]: 2025-12-09 16:18:34.051258083 +0000 UTC m=+0.124970300 container init 1f856363ed2a3911373a18ddfb127d2e09a112fae34a22f5d16928e5f6c368eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:18:34 compute-0 podman[211699]: 2025-12-09 16:18:34.057031665 +0000 UTC m=+0.130743852 container start 1f856363ed2a3911373a18ddfb127d2e09a112fae34a22f5d16928e5f6c368eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 09 16:18:34 compute-0 podman[211699]: 2025-12-09 16:18:34.060661577 +0000 UTC m=+0.134373784 container attach 1f856363ed2a3911373a18ddfb127d2e09a112fae34a22f5d16928e5f6c368eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:18:34 compute-0 xenodochial_bose[211716]: 167 167
Dec 09 16:18:34 compute-0 systemd[1]: libpod-1f856363ed2a3911373a18ddfb127d2e09a112fae34a22f5d16928e5f6c368eb.scope: Deactivated successfully.
Dec 09 16:18:34 compute-0 podman[211699]: 2025-12-09 16:18:34.061958104 +0000 UTC m=+0.135670291 container died 1f856363ed2a3911373a18ddfb127d2e09a112fae34a22f5d16928e5f6c368eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 09 16:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d67a9d014c2fda8733189f5fa2440beae9b766c9cced90caec7c0caf0a3e1b5-merged.mount: Deactivated successfully.
Dec 09 16:18:34 compute-0 podman[211699]: 2025-12-09 16:18:34.101259101 +0000 UTC m=+0.174971288 container remove 1f856363ed2a3911373a18ddfb127d2e09a112fae34a22f5d16928e5f6c368eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Dec 09 16:18:34 compute-0 systemd[1]: libpod-conmon-1f856363ed2a3911373a18ddfb127d2e09a112fae34a22f5d16928e5f6c368eb.scope: Deactivated successfully.
Dec 09 16:18:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:34 compute-0 podman[211811]: 2025-12-09 16:18:34.260934327 +0000 UTC m=+0.047013525 container create 216b46067a03f47a05e874c25dcf5e87f34c782fd48f6af9f3d659aa24ed55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:18:34 compute-0 systemd[1]: Started libpod-conmon-216b46067a03f47a05e874c25dcf5e87f34c782fd48f6af9f3d659aa24ed55c3.scope.
Dec 09 16:18:34 compute-0 podman[211811]: 2025-12-09 16:18:34.242491218 +0000 UTC m=+0.028570436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:18:34 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e38cdb663efaac2cb3e82b8071f06e2efb577a4582ef51673860478e122af24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e38cdb663efaac2cb3e82b8071f06e2efb577a4582ef51673860478e122af24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e38cdb663efaac2cb3e82b8071f06e2efb577a4582ef51673860478e122af24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e38cdb663efaac2cb3e82b8071f06e2efb577a4582ef51673860478e122af24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e38cdb663efaac2cb3e82b8071f06e2efb577a4582ef51673860478e122af24/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:34 compute-0 podman[211811]: 2025-12-09 16:18:34.361141818 +0000 UTC m=+0.147221016 container init 216b46067a03f47a05e874c25dcf5e87f34c782fd48f6af9f3d659aa24ed55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_villani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:18:34 compute-0 podman[211811]: 2025-12-09 16:18:34.369673358 +0000 UTC m=+0.155752546 container start 216b46067a03f47a05e874c25dcf5e87f34c782fd48f6af9f3d659aa24ed55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 09 16:18:34 compute-0 podman[211811]: 2025-12-09 16:18:34.373032583 +0000 UTC m=+0.159111801 container attach 216b46067a03f47a05e874c25dcf5e87f34c782fd48f6af9f3d659aa24ed55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:18:34 compute-0 sudo[211911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwyvpmghytuenyphgslorewvpkkgexmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297114.161613-1361-224128118484664/AnsiballZ_stat.py'
Dec 09 16:18:34 compute-0 sudo[211911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:34 compute-0 python3.9[211913]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:18:34 compute-0 sudo[211911]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:34 compute-0 mystifying_villani[211856]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:18:34 compute-0 mystifying_villani[211856]: --> All data devices are unavailable
Dec 09 16:18:34 compute-0 systemd[1]: libpod-216b46067a03f47a05e874c25dcf5e87f34c782fd48f6af9f3d659aa24ed55c3.scope: Deactivated successfully.
Dec 09 16:18:34 compute-0 podman[211811]: 2025-12-09 16:18:34.936824659 +0000 UTC m=+0.722903877 container died 216b46067a03f47a05e874c25dcf5e87f34c782fd48f6af9f3d659aa24ed55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e38cdb663efaac2cb3e82b8071f06e2efb577a4582ef51673860478e122af24-merged.mount: Deactivated successfully.
Dec 09 16:18:34 compute-0 podman[211811]: 2025-12-09 16:18:34.993949617 +0000 UTC m=+0.780028795 container remove 216b46067a03f47a05e874c25dcf5e87f34c782fd48f6af9f3d659aa24ed55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 09 16:18:35 compute-0 systemd[1]: libpod-conmon-216b46067a03f47a05e874c25dcf5e87f34c782fd48f6af9f3d659aa24ed55c3.scope: Deactivated successfully.
Dec 09 16:18:35 compute-0 sudo[211632]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:35 compute-0 sudo[212067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:18:35 compute-0 sudo[212067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:18:35 compute-0 sudo[212067]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:35 compute-0 sudo[212117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rexpkdrxzeljwuujdkurwhsftkcehhqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297114.818524-1369-41646034739021/AnsiballZ_command.py'
Dec 09 16:18:35 compute-0 sudo[212117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:35 compute-0 sudo[212120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:18:35 compute-0 sudo[212120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:18:35 compute-0 python3.9[212123]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:18:35 compute-0 sudo[212117]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:18:35 compute-0 podman[212161]: 2025-12-09 16:18:35.405519297 +0000 UTC m=+0.049271879 container create eb72da79ad46a1ff841bbb5deba8bafad8c6f908a4735f3b1b0e6d2933cb2faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_carver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:18:35 compute-0 systemd[1]: Started libpod-conmon-eb72da79ad46a1ff841bbb5deba8bafad8c6f908a4735f3b1b0e6d2933cb2faf.scope.
Dec 09 16:18:35 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:18:35 compute-0 podman[212161]: 2025-12-09 16:18:35.385160054 +0000 UTC m=+0.028912686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:18:35 compute-0 podman[212161]: 2025-12-09 16:18:35.482862385 +0000 UTC m=+0.126615017 container init eb72da79ad46a1ff841bbb5deba8bafad8c6f908a4735f3b1b0e6d2933cb2faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_carver, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 09 16:18:35 compute-0 podman[212161]: 2025-12-09 16:18:35.489268415 +0000 UTC m=+0.133021027 container start eb72da79ad46a1ff841bbb5deba8bafad8c6f908a4735f3b1b0e6d2933cb2faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:18:35 compute-0 stupefied_carver[212202]: 167 167
Dec 09 16:18:35 compute-0 podman[212161]: 2025-12-09 16:18:35.493444913 +0000 UTC m=+0.137197515 container attach eb72da79ad46a1ff841bbb5deba8bafad8c6f908a4735f3b1b0e6d2933cb2faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 09 16:18:35 compute-0 systemd[1]: libpod-eb72da79ad46a1ff841bbb5deba8bafad8c6f908a4735f3b1b0e6d2933cb2faf.scope: Deactivated successfully.
Dec 09 16:18:35 compute-0 podman[212161]: 2025-12-09 16:18:35.494847652 +0000 UTC m=+0.138600234 container died eb72da79ad46a1ff841bbb5deba8bafad8c6f908a4735f3b1b0e6d2933cb2faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_carver, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 09 16:18:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0d9c66a4cc665f0aa425748a572719216b43598ed219b316c123888a618abec-merged.mount: Deactivated successfully.
Dec 09 16:18:35 compute-0 podman[212161]: 2025-12-09 16:18:35.532143853 +0000 UTC m=+0.175896435 container remove eb72da79ad46a1ff841bbb5deba8bafad8c6f908a4735f3b1b0e6d2933cb2faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:18:35 compute-0 systemd[1]: libpod-conmon-eb72da79ad46a1ff841bbb5deba8bafad8c6f908a4735f3b1b0e6d2933cb2faf.scope: Deactivated successfully.
Dec 09 16:18:35 compute-0 ceph-mon[75222]: pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:35 compute-0 podman[212298]: 2025-12-09 16:18:35.716155044 +0000 UTC m=+0.039347049 container create 8c9ce05a990aa81d0c16ebc3ea1cfe08f923df30d9e1783d22b8751e95b9c770 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_tu, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:18:35 compute-0 systemd[1]: Started libpod-conmon-8c9ce05a990aa81d0c16ebc3ea1cfe08f923df30d9e1783d22b8751e95b9c770.scope.
Dec 09 16:18:35 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528c593c87d9d30564b7bcd3942df0c378ea4c808aa0f9f39a991dc2687095fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528c593c87d9d30564b7bcd3942df0c378ea4c808aa0f9f39a991dc2687095fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528c593c87d9d30564b7bcd3942df0c378ea4c808aa0f9f39a991dc2687095fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528c593c87d9d30564b7bcd3942df0c378ea4c808aa0f9f39a991dc2687095fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:35 compute-0 podman[212298]: 2025-12-09 16:18:35.790792736 +0000 UTC m=+0.113984771 container init 8c9ce05a990aa81d0c16ebc3ea1cfe08f923df30d9e1783d22b8751e95b9c770 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_tu, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:18:35 compute-0 podman[212298]: 2025-12-09 16:18:35.697619302 +0000 UTC m=+0.020811347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:18:35 compute-0 podman[212298]: 2025-12-09 16:18:35.802692571 +0000 UTC m=+0.125884576 container start 8c9ce05a990aa81d0c16ebc3ea1cfe08f923df30d9e1783d22b8751e95b9c770 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_tu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:18:35 compute-0 podman[212298]: 2025-12-09 16:18:35.806396746 +0000 UTC m=+0.129588781 container attach 8c9ce05a990aa81d0c16ebc3ea1cfe08f923df30d9e1783d22b8751e95b9c770 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 16:18:35 compute-0 sudo[212372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ursducdizykuvdqsoanmbquffuukkpac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297115.542806-1377-127674259771959/AnsiballZ_file.py'
Dec 09 16:18:35 compute-0 sudo[212372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:36 compute-0 python3.9[212374]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:36 compute-0 sudo[212372]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:36 compute-0 nervous_tu[212335]: {
Dec 09 16:18:36 compute-0 nervous_tu[212335]:     "0": [
Dec 09 16:18:36 compute-0 nervous_tu[212335]:         {
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "devices": [
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "/dev/loop3"
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             ],
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_name": "ceph_lv0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_size": "21470642176",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "name": "ceph_lv0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "tags": {
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.cluster_name": "ceph",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.crush_device_class": "",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.encrypted": "0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.objectstore": "bluestore",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.osd_id": "0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.type": "block",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.vdo": "0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.with_tpm": "0"
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             },
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "type": "block",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "vg_name": "ceph_vg0"
Dec 09 16:18:36 compute-0 nervous_tu[212335]:         }
Dec 09 16:18:36 compute-0 nervous_tu[212335]:     ],
Dec 09 16:18:36 compute-0 nervous_tu[212335]:     "1": [
Dec 09 16:18:36 compute-0 nervous_tu[212335]:         {
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "devices": [
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "/dev/loop4"
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             ],
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_name": "ceph_lv1",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_size": "21470642176",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "name": "ceph_lv1",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "tags": {
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.cluster_name": "ceph",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.crush_device_class": "",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.encrypted": "0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.objectstore": "bluestore",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.osd_id": "1",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.type": "block",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.vdo": "0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.with_tpm": "0"
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             },
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "type": "block",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "vg_name": "ceph_vg1"
Dec 09 16:18:36 compute-0 nervous_tu[212335]:         }
Dec 09 16:18:36 compute-0 nervous_tu[212335]:     ],
Dec 09 16:18:36 compute-0 nervous_tu[212335]:     "2": [
Dec 09 16:18:36 compute-0 nervous_tu[212335]:         {
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "devices": [
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "/dev/loop5"
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             ],
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_name": "ceph_lv2",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_size": "21470642176",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "name": "ceph_lv2",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "tags": {
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.cluster_name": "ceph",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.crush_device_class": "",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.encrypted": "0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.objectstore": "bluestore",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.osd_id": "2",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.type": "block",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.vdo": "0",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:                 "ceph.with_tpm": "0"
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             },
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "type": "block",
Dec 09 16:18:36 compute-0 nervous_tu[212335]:             "vg_name": "ceph_vg2"
Dec 09 16:18:36 compute-0 nervous_tu[212335]:         }
Dec 09 16:18:36 compute-0 nervous_tu[212335]:     ]
Dec 09 16:18:36 compute-0 nervous_tu[212335]: }
Dec 09 16:18:36 compute-0 systemd[1]: libpod-8c9ce05a990aa81d0c16ebc3ea1cfe08f923df30d9e1783d22b8751e95b9c770.scope: Deactivated successfully.
Dec 09 16:18:36 compute-0 podman[212298]: 2025-12-09 16:18:36.128852486 +0000 UTC m=+0.452044481 container died 8c9ce05a990aa81d0c16ebc3ea1cfe08f923df30d9e1783d22b8751e95b9c770 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_tu, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 09 16:18:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-528c593c87d9d30564b7bcd3942df0c378ea4c808aa0f9f39a991dc2687095fa-merged.mount: Deactivated successfully.
Dec 09 16:18:36 compute-0 podman[212298]: 2025-12-09 16:18:36.173148543 +0000 UTC m=+0.496340538 container remove 8c9ce05a990aa81d0c16ebc3ea1cfe08f923df30d9e1783d22b8751e95b9c770 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:18:36 compute-0 systemd[1]: libpod-conmon-8c9ce05a990aa81d0c16ebc3ea1cfe08f923df30d9e1783d22b8751e95b9c770.scope: Deactivated successfully.
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:36 compute-0 sudo[212120]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:36 compute-0 sudo[212438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:18:36 compute-0 sudo[212438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:18:36 compute-0 sudo[212438]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:36 compute-0 sudo[212490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:18:36 compute-0 sudo[212490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:18:36 compute-0 sudo[212592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyvbyeuzkzdaqutikohtywjccdxyjxoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297116.244684-1385-173963792306326/AnsiballZ_stat.py'
Dec 09 16:18:36 compute-0 sudo[212592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:18:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:18:36 compute-0 podman[212605]: 2025-12-09 16:18:36.613083082 +0000 UTC m=+0.041313625 container create 0f7a8d996e9e2840d94c7949ab7df73530d11ffa407f0a38014623439ef44ed4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_dirac, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 16:18:36 compute-0 systemd[1]: Started libpod-conmon-0f7a8d996e9e2840d94c7949ab7df73530d11ffa407f0a38014623439ef44ed4.scope.
Dec 09 16:18:36 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:18:36 compute-0 podman[212605]: 2025-12-09 16:18:36.5963285 +0000 UTC m=+0.024559063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:18:36 compute-0 podman[212605]: 2025-12-09 16:18:36.702277123 +0000 UTC m=+0.130507686 container init 0f7a8d996e9e2840d94c7949ab7df73530d11ffa407f0a38014623439ef44ed4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:18:36 compute-0 podman[212605]: 2025-12-09 16:18:36.707959473 +0000 UTC m=+0.136190016 container start 0f7a8d996e9e2840d94c7949ab7df73530d11ffa407f0a38014623439ef44ed4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_dirac, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:18:36 compute-0 podman[212605]: 2025-12-09 16:18:36.711297087 +0000 UTC m=+0.139527630 container attach 0f7a8d996e9e2840d94c7949ab7df73530d11ffa407f0a38014623439ef44ed4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_dirac, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:18:36 compute-0 flamboyant_dirac[212621]: 167 167
Dec 09 16:18:36 compute-0 systemd[1]: libpod-0f7a8d996e9e2840d94c7949ab7df73530d11ffa407f0a38014623439ef44ed4.scope: Deactivated successfully.
Dec 09 16:18:36 compute-0 podman[212605]: 2025-12-09 16:18:36.713137339 +0000 UTC m=+0.141367882 container died 0f7a8d996e9e2840d94c7949ab7df73530d11ffa407f0a38014623439ef44ed4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:18:36 compute-0 python3.9[212603]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-54687c6bf4a3d669e2f67e10c0c57fb9ef44540ccbc5dbfe7c176fc11ccf2146-merged.mount: Deactivated successfully.
Dec 09 16:18:36 compute-0 sudo[212592]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:36 compute-0 podman[212605]: 2025-12-09 16:18:36.748503605 +0000 UTC m=+0.176734148 container remove 0f7a8d996e9e2840d94c7949ab7df73530d11ffa407f0a38014623439ef44ed4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_dirac, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:18:36 compute-0 systemd[1]: libpod-conmon-0f7a8d996e9e2840d94c7949ab7df73530d11ffa407f0a38014623439ef44ed4.scope: Deactivated successfully.
Dec 09 16:18:36 compute-0 podman[212684]: 2025-12-09 16:18:36.903058067 +0000 UTC m=+0.036211000 container create 7757161aeb97f850a0ee999ebb23e5ab194aab44cf3b6031b506fa508e33fca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leakey, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:18:36 compute-0 systemd[1]: Started libpod-conmon-7757161aeb97f850a0ee999ebb23e5ab194aab44cf3b6031b506fa508e33fca1.scope.
Dec 09 16:18:36 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb1bf1f3d67778580d39d7b956ed0dc0f84359c7af9368c601c6cfcc698ed09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb1bf1f3d67778580d39d7b956ed0dc0f84359c7af9368c601c6cfcc698ed09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb1bf1f3d67778580d39d7b956ed0dc0f84359c7af9368c601c6cfcc698ed09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb1bf1f3d67778580d39d7b956ed0dc0f84359c7af9368c601c6cfcc698ed09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:18:36 compute-0 podman[212684]: 2025-12-09 16:18:36.88753427 +0000 UTC m=+0.020687233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:18:36 compute-0 podman[212684]: 2025-12-09 16:18:36.988636737 +0000 UTC m=+0.121789730 container init 7757161aeb97f850a0ee999ebb23e5ab194aab44cf3b6031b506fa508e33fca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:18:37 compute-0 podman[212684]: 2025-12-09 16:18:37.001047157 +0000 UTC m=+0.134200130 container start 7757161aeb97f850a0ee999ebb23e5ab194aab44cf3b6031b506fa508e33fca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:18:37 compute-0 podman[212684]: 2025-12-09 16:18:37.005866352 +0000 UTC m=+0.139019305 container attach 7757161aeb97f850a0ee999ebb23e5ab194aab44cf3b6031b506fa508e33fca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leakey, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 16:18:37 compute-0 sudo[212786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vspgtlyigwmsmdyrcvfmrhgdviagwmsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297116.244684-1385-173963792306326/AnsiballZ_copy.py'
Dec 09 16:18:37 compute-0 sudo[212786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:37 compute-0 python3.9[212788]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765297116.244684-1385-173963792306326/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:37 compute-0 sudo[212786]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:37 compute-0 ceph-mon[75222]: pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:37 compute-0 lvm[212978]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:18:37 compute-0 lvm[212980]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:18:37 compute-0 lvm[212980]: VG ceph_vg1 finished
Dec 09 16:18:37 compute-0 lvm[212978]: VG ceph_vg0 finished
Dec 09 16:18:37 compute-0 lvm[212988]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:18:37 compute-0 lvm[212988]: VG ceph_vg2 finished
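These six lvm lines are lvm2's event-driven autoactivation at work: each "PV ... online, VG ... is complete" message means every physical volume of that VG has now appeared (here each ceph_vgN sits on a single loop device), and "VG ... finished" marks the end of its autoactivation. A small parser, assuming journal lines in exactly this format, that reports which VGs completed:

    import re

    COMPLETE = re.compile(r"PV (\S+) online, VG (\S+) is complete")
    FINISHED = re.compile(r"VG (\S+) finished")

    def vg_status(lines):
        """Map VG name -> (PVs seen, finished?) from lvm journal lines."""
        vgs = {}
        for line in lines:
            if m := COMPLETE.search(line):
                vgs.setdefault(m.group(2), ([], False))[0].append(m.group(1))
            elif m := FINISHED.search(line):
                pvs, _ = vgs.setdefault(m.group(1), ([], False))
                vgs[m.group(1)] = (pvs, True)
        return vgs

Fed the six lines above, this yields {'ceph_vg0': (['/dev/loop3'], True), 'ceph_vg1': (['/dev/loop4'], True), 'ceph_vg2': (['/dev/loop5'], True)}.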
Dec 09 16:18:37 compute-0 sudo[213015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzcexkonslvddajzfondutwfwxjouihi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297117.489438-1400-157896110074552/AnsiballZ_stat.py'
Dec 09 16:18:37 compute-0 sudo[213015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:37 compute-0 nifty_leakey[212731]: {}
Dec 09 16:18:37 compute-0 systemd[1]: libpod-7757161aeb97f850a0ee999ebb23e5ab194aab44cf3b6031b506fa508e33fca1.scope: Deactivated successfully.
Dec 09 16:18:37 compute-0 podman[212684]: 2025-12-09 16:18:37.867797174 +0000 UTC m=+1.000950127 container died 7757161aeb97f850a0ee999ebb23e5ab194aab44cf3b6031b506fa508e33fca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:18:37 compute-0 systemd[1]: libpod-7757161aeb97f850a0ee999ebb23e5ab194aab44cf3b6031b506fa508e33fca1.scope: Consumed 1.368s CPU time.
Dec 09 16:18:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdb1bf1f3d67778580d39d7b956ed0dc0f84359c7af9368c601c6cfcc698ed09-merged.mount: Deactivated successfully.
Dec 09 16:18:37 compute-0 podman[212684]: 2025-12-09 16:18:37.918138901 +0000 UTC m=+1.051291844 container remove 7757161aeb97f850a0ee999ebb23e5ab194aab44cf3b6031b506fa508e33fca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:18:37 compute-0 systemd[1]: libpod-conmon-7757161aeb97f850a0ee999ebb23e5ab194aab44cf3b6031b506fa508e33fca1.scope: Deactivated successfully.
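That was the full lifecycle of one short-lived cephadm helper container (podman's auto-generated name nifty_leakey): init, start, attach, died, remove, with systemd tearing down the libpod scopes around it. Each podman event carries m=+N.NNNNNN, the monotonic offset since the podman process started, so the container's lifetime can be read straight from the events. A sketch, assuming the event-line format shown above:

    import re

    EVENT = re.compile(r"m=\+(\d+\.\d+) container (\w+) [0-9a-f]{12}")

    def event_offsets(lines):
        """Return {event name: monotonic offset in seconds} for podman events."""
        t = {}
        for line in lines:
            if m := EVENT.search(line):
                t[m.group(2)] = float(m.group(1))
        return t

    # For the run above: t["died"] - t["start"] is about 0.87 s of runtime.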
Dec 09 16:18:37 compute-0 sudo[212490]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:18:37 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:18:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:18:37 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:18:37 compute-0 python3.9[213018]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:37 compute-0 sudo[213015]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:38 compute-0 sudo[213032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:18:38 compute-0 sudo[213032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:18:38 compute-0 sudo[213032]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:38 compute-0 sudo[213177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilnaekhuusmsreupzodbgozfbtqhzlhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297117.489438-1400-157896110074552/AnsiballZ_copy.py'
Dec 09 16:18:38 compute-0 sudo[213177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:38 compute-0 python3.9[213179]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765297117.489438-1400-157896110074552/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:38 compute-0 sudo[213177]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:18:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:18:38 compute-0 ceph-mon[75222]: pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
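The interleaved ceph-mgr/ceph-mon pgmap lines recur every second or two for the rest of this section; they are the cluster's heartbeat, and here they consistently report all 305 placement groups active+clean with 461 KiB of data on a 60 GiB cluster, i.e. a healthy but nearly empty deployment. A parser for one such line, assuming exactly the "pgmap vN: ..." shape logged here:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    def parse_pgmap(line):
        """Return the pgmap fields as a dict, or None for non-pgmap lines."""
        m = PGMAP.search(line)
        return m.groupdict() if m else None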
Dec 09 16:18:39 compute-0 sudo[213329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orkrntlfvamlfguexzgzrhbxbkwladzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297118.7577326-1415-149201983537461/AnsiballZ_stat.py'
Dec 09 16:18:39 compute-0 sudo[213329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:39 compute-0 python3.9[213331]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:18:39 compute-0 sudo[213329]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:39 compute-0 sudo[213452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdfklgkvmuugjnfhcuzhfbwuwrlzsjea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297118.7577326-1415-149201983537461/AnsiballZ_copy.py'
Dec 09 16:18:39 compute-0 sudo[213452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:39 compute-0 python3.9[213454]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765297118.7577326-1415-149201983537461/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:18:39 compute-0 sudo[213452]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:18:40 compute-0 sudo[213604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ausvfvwxensqpymesuiwaevogcvgceuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297120.0533586-1430-257677325177612/AnsiballZ_systemd.py'
Dec 09 16:18:40 compute-0 sudo[213604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:40 compute-0 python3.9[213606]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:18:40 compute-0 systemd[1]: Reloading.
Dec 09 16:18:40 compute-0 systemd-rc-local-generator[213636]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:18:40 compute-0 systemd-sysv-generator[213639]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:18:41 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Dec 09 16:18:41 compute-0 sudo[213604]: pam_unix(sudo:session): session closed for user root
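This block is ansible.builtin.systemd with daemon_reload=True, enabled=True, state=restarted acting on edpm_libvirt.target: "Reloading." is the daemon-reload, the generator notices (rc.local not executable, SysV 'network' compatibility shim) reappear on every reload, and the target is then reached. Ansible drives systemd itself, but the manual equivalent is roughly the following sketch:

    import subprocess

    def enable_and_restart(unit):
        """Approximate command-line equivalent of the ansible systemd task above."""
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)

    # enable_and_restart("edpm_libvirt.target")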
Dec 09 16:18:41 compute-0 ceph-mon[75222]: pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:41 compute-0 sudo[213797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqbfrrqfkykfglebcbonoervtfzoyqep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297121.3883634-1438-134535551114339/AnsiballZ_systemd.py'
Dec 09 16:18:41 compute-0 sudo[213797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:42 compute-0 python3.9[213799]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 09 16:18:42 compute-0 systemd[1]: Reloading.
Dec 09 16:18:42 compute-0 systemd-rc-local-generator[213826]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:18:42 compute-0 systemd-sysv-generator[213830]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:18:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:42 compute-0 systemd[1]: Reloading.
Dec 09 16:18:42 compute-0 systemd-sysv-generator[213868]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:18:42 compute-0 systemd-rc-local-generator[213864]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:18:42 compute-0 sudo[213797]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:43 compute-0 sshd-session[155223]: Connection closed by 192.168.122.30 port 55954
Dec 09 16:18:43 compute-0 sshd-session[155220]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:18:43 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Dec 09 16:18:43 compute-0 systemd[1]: session-49.scope: Consumed 3min 27.970s CPU time.
Dec 09 16:18:43 compute-0 systemd-logind[786]: Session 49 logged out. Waiting for processes to exit.
Dec 09 16:18:43 compute-0 systemd-logind[786]: Removed session 49.
Dec 09 16:18:43 compute-0 ceph-mon[75222]: pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:18:45 compute-0 ceph-mon[75222]: pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:46 compute-0 podman[213896]: 2025-12-09 16:18:46.703556343 +0000 UTC m=+0.149552682 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
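health_status events like this one are podman's periodic healthchecks: config_data defines the probe as 'test': '/openstack/healthcheck' run inside the container, and a passing run is logged with health_status=healthy and health_failing_streak=0. Pulling those two fields out of an event line is straightforward, assuming the key=value layout shown:

    import re

    def health_fields(line):
        """Extract (health_status, failing streak) from a podman event line."""
        status = re.search(r"health_status=(\w+)", line)
        streak = re.search(r"health_failing_streak=(\d+)", line)
        return (status.group(1) if status else None,
                int(streak.group(1)) if streak else None)

    # -> ('healthy', 0) for the ovn_controller event above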
Dec 09 16:18:47 compute-0 ceph-mon[75222]: pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:49 compute-0 sshd-session[213923]: Accepted publickey for zuul from 192.168.122.30 port 43022 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:18:49 compute-0 systemd-logind[786]: New session 50 of user zuul.
Dec 09 16:18:49 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec 09 16:18:49 compute-0 sshd-session[213923]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:18:49 compute-0 podman[213925]: 2025-12-09 16:18:49.190642287 +0000 UTC m=+0.052435738 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:18:49 compute-0 ceph-mon[75222]: pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:50 compute-0 python3.9[214095]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:18:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:18:51 compute-0 python3.9[214249]: ansible-ansible.builtin.service_facts Invoked
Dec 09 16:18:51 compute-0 network[214266]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 16:18:51 compute-0 network[214267]: 'network-scripts' will be removed from distribution in near future.
Dec 09 16:18:51 compute-0 network[214268]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 16:18:51 compute-0 ceph-mon[75222]: pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:53 compute-0 ceph-mon[75222]: pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:54 compute-0 ceph-mon[75222]: pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:55 compute-0 sudo[214538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idskuuagqzeadnbjksxixnkfzwaflxeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297134.781532-47-114973249960684/AnsiballZ_setup.py'
Dec 09 16:18:55 compute-0 sudo[214538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:18:55 compute-0 python3.9[214540]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 16:18:55 compute-0 sudo[214538]: pam_unix(sudo:session): session closed for user root
Dec 09 16:18:56 compute-0 sudo[214622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmdwiydqwqrfaahjsogwnmgklkgrbrdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297134.781532-47-114973249960684/AnsiballZ_dnf.py'
Dec 09 16:18:56 compute-0 sudo[214622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:18:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:56 compute-0 python3.9[214624]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 16:18:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:18:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:18:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:18:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:18:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:18:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:18:57 compute-0 ceph-mon[75222]: pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:18:58 compute-0 sshd-session[214626]: Invalid user odoo from 146.190.31.45 port 39792
Dec 09 16:18:58 compute-0 sshd-session[214626]: Connection closed by invalid user odoo 146.190.31.45 port 39792 [preauth]
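These two sshd-session lines are unrelated to the deployment: an Internet host at 146.190.31.45 probing for an "odoo" account and disconnecting before authentication. Separating such scanner noise from the CI traffic is a simple tally over the journal; illustrative only, assuming journal text on stdin:

    import re
    import sys
    from collections import Counter

    hits = Counter()
    for line in sys.stdin:
        if m := re.search(r"Invalid user (\S+) from (\S+) port \d+", line):
            hits[(m.group(1), m.group(2))] += 1  # (attempted user, source IP)

    for (user, ip), n in hits.most_common():
        print(f"{n:4d}  {user}@{ip}")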
Dec 09 16:18:59 compute-0 ceph-mon[75222]: pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:19:01 compute-0 ceph-mon[75222]: pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:01 compute-0 sudo[214622]: pam_unix(sudo:session): session closed for user root
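The ansible.legacy.dnf task above (running from 16:18:56 to 16:19:01) ensures iscsi-initiator-utils is present; state=present makes it a no-op when the package is already installed. The command-line equivalent, as a sketch:

    import subprocess

    # Equivalent of the dnf task above (state=present, default options).
    subprocess.run(["dnf", "install", "-y", "iscsi-initiator-utils"], check=True)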
Dec 09 16:19:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:02 compute-0 sudo[214777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdyhmssbpxpeeimsqxdvydsiivrskiiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297142.1776042-59-184993676666935/AnsiballZ_stat.py'
Dec 09 16:19:02 compute-0 sudo[214777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:02 compute-0 python3.9[214779]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:19:02 compute-0 sudo[214777]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:02 compute-0 ceph-mon[75222]: pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:03 compute-0 sudo[214929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdtounjzfmccicijhxkpnnjkvjylpneh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297143.1543162-69-4763074388448/AnsiballZ_command.py'
Dec 09 16:19:03 compute-0 sudo[214929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:03 compute-0 python3.9[214931]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:19:03 compute-0 sudo[214929]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:04 compute-0 sudo[215082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrmtpwdhaexhowybchuramjwqmagnqwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297144.1767669-79-96245953881698/AnsiballZ_stat.py'
Dec 09 16:19:04 compute-0 sudo[215082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:04 compute-0 python3.9[215084]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:19:04 compute-0 sudo[215082]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:05 compute-0 sudo[215234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oafcbzyznofgxfmpgwihhefhfuwzfrhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297144.812517-87-280825664823462/AnsiballZ_command.py'
Dec 09 16:19:05 compute-0 sudo[215234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:05 compute-0 python3.9[215236]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:19:05 compute-0 sudo[215234]: pam_unix(sudo:session): session closed for user root
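/usr/sbin/iscsi-iname prints a freshly generated initiator IQN (on RHEL-family hosts, iqn.1994-05.com.redhat: followed by a random suffix), which the next task writes into /etc/iscsi/initiatorname.iscsi. A sketch of generating and persisting such a name; the prefix matches the log's platform, but the suffix format here is illustrative, not iscsi-iname's exact algorithm:

    import secrets

    def make_initiator_name(prefix="iqn.1994-05.com.redhat"):
        """Produce an IQN-style initiator name with a random suffix."""
        return f"{prefix}:{secrets.token_hex(6)}"

    # Requires root, like the ansible copy task that follows in the log:
    # with open("/etc/iscsi/initiatorname.iscsi", "w") as f:
    #     f.write(f"InitiatorName={make_initiator_name()}\n")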
Dec 09 16:19:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:19:05 compute-0 ceph-mon[75222]: pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:05 compute-0 sudo[215387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnhrnirvvavafqcikregrgvjvgjflixz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297145.4890764-95-279167983275638/AnsiballZ_stat.py'
Dec 09 16:19:05 compute-0 sudo[215387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:05 compute-0 python3.9[215389]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:19:05 compute-0 sudo[215387]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:06 compute-0 sudo[215510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgibpvuxsuqikwcrkkjnbublcutlocdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297145.4890764-95-279167983275638/AnsiballZ_copy.py'
Dec 09 16:19:06 compute-0 sudo[215510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:06 compute-0 python3.9[215512]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765297145.4890764-95-279167983275638/.source.iscsi _original_basename=.bunak70b follow=False checksum=a97be65ae020da3b5b244c11b89858358da2acbd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:06 compute-0 sudo[215510]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:07 compute-0 sudo[215662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzssdbiujmfmaqdhmpsvqqpeirzhybev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297146.8618655-110-154497711030468/AnsiballZ_file.py'
Dec 09 16:19:07 compute-0 sudo[215662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:07 compute-0 python3.9[215664]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:07 compute-0 sudo[215662]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:07 compute-0 ceph-mon[75222]: pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:08 compute-0 sudo[215814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixdutccsoppjzmzkztztmyyxjavxzslp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297147.8019454-118-261706713510568/AnsiballZ_lineinfile.py'
Dec 09 16:19:08 compute-0 sudo[215814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:08 compute-0 python3.9[215816]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:08 compute-0 sudo[215814]: pam_unix(sudo:session): session closed for user root
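This lineinfile call pins the CHAP digest preference in /etc/iscsi/iscsid.conf: if a line matching ^node.session.auth.chap_algs already exists it is replaced, otherwise the new line is inserted after the commented default (insertafter=^#node.session.auth.chap.algs). The same ensure-line logic without Ansible, as a sketch; ensure_line is an illustrative helper that, like lineinfile's default, inserts after the last anchor match:

    import re

    def ensure_line(path, regexp, line, insertafter):
        """lineinfile-style edit: replace a matching line or insert after anchor."""
        with open(path) as f:
            lines = f.read().splitlines()
        pat, anchor = re.compile(regexp), re.compile(insertafter)
        for i, text in enumerate(lines):
            if pat.match(text):
                lines[i] = line  # setting exists: replace it in place
                break
        else:
            idx = max((i for i, text in enumerate(lines) if anchor.match(text)),
                      default=len(lines) - 1)
            lines.insert(idx + 1, line)  # insert after the last anchor match
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")

    # ensure_line("/etc/iscsi/iscsid.conf",
    #             r"^node\.session\.auth\.chap_algs",
    #             "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5",
    #             r"^#node\.session\.auth\.chap\.algs")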
Dec 09 16:19:09 compute-0 sudo[215966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoihxrboudbppapbjafvxwwscbljvria ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297148.8510103-127-76315853657061/AnsiballZ_systemd_service.py'
Dec 09 16:19:09 compute-0 sudo[215966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:09 compute-0 python3.9[215968]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:19:09 compute-0 ceph-mon[75222]: pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:09 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 09 16:19:09 compute-0 sudo[215966]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:10 compute-0 sudo[216122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxjqhpymgaojfiuqhauibycrkalccjgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297150.0171077-135-83061968119505/AnsiballZ_systemd_service.py'
Dec 09 16:19:10 compute-0 sudo[216122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:19:10 compute-0 python3.9[216124]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:19:10 compute-0 systemd[1]: Reloading.
Dec 09 16:19:10 compute-0 systemd-rc-local-generator[216154]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:19:10 compute-0 systemd-sysv-generator[216157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:19:11 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 09 16:19:11 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 09 16:19:11 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec 09 16:19:11 compute-0 systemd[1]: Started Open-iSCSI.
Dec 09 16:19:11 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 09 16:19:11 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 09 16:19:11 compute-0 sudo[216122]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:11 compute-0 sudo[216322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeiymbfjlncgxbtebetcsycubdkkymzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297151.468207-146-217616787488179/AnsiballZ_service_facts.py'
Dec 09 16:19:11 compute-0 sudo[216322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:11 compute-0 ceph-mon[75222]: pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:11 compute-0 python3.9[216324]: ansible-ansible.builtin.service_facts Invoked
Dec 09 16:19:12 compute-0 network[216341]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 16:19:12 compute-0 network[216342]: 'network-scripts' will be removed from distribution in near future.
Dec 09 16:19:12 compute-0 network[216343]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 16:19:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:13 compute-0 ceph-mon[75222]: pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:15 compute-0 sudo[216322]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:19:15 compute-0 sudo[216613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifnxficjyutmugvuapuiesccpgidqtkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297155.5585566-156-93522176615623/AnsiballZ_file.py'
Dec 09 16:19:15 compute-0 sudo[216613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:15 compute-0 ceph-mon[75222]: pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:15 compute-0 python3.9[216615]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 09 16:19:16 compute-0 sudo[216613]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:16 compute-0 sudo[216765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqtrdlgblsgfrkesccuixmrkckuphvya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297156.1948729-164-31439138135134/AnsiballZ_modprobe.py'
Dec 09 16:19:16 compute-0 sudo[216765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:16 compute-0 python3.9[216767]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 09 16:19:16 compute-0 sudo[216765]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:17 compute-0 podman[216771]: 2025-12-09 16:19:17.014605983 +0000 UTC m=+0.118464887 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:19:17 compute-0 sudo[216946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fisanneenjricrexsicirejllppgqcaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297157.0910518-172-277608490105847/AnsiballZ_stat.py'
Dec 09 16:19:17 compute-0 sudo[216946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:17 compute-0 python3.9[216948]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:19:17 compute-0 sudo[216946]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:19:17.835 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:19:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:19:17.836 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:19:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:19:17.836 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:19:18 compute-0 sudo[217069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzdyrepvzjncqipoqnbkzjlklbxfjmdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297157.0910518-172-277608490105847/AnsiballZ_copy.py'
Dec 09 16:19:18 compute-0 sudo[217069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:18 compute-0 ceph-mon[75222]: pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:18 compute-0 python3.9[217071]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765297157.0910518-172-277608490105847/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:18 compute-0 sudo[217069]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:18 compute-0 sudo[217221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfvhhlwflxtarpvcdpemxwwwxzutbuer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297158.4843605-188-217226466960481/AnsiballZ_lineinfile.py'
Dec 09 16:19:18 compute-0 sudo[217221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:18 compute-0 python3.9[217223]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:18 compute-0 sudo[217221]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:19 compute-0 ceph-mon[75222]: pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:19 compute-0 podman[217300]: 2025-12-09 16:19:19.618679162 +0000 UTC m=+0.060012491 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:19:19 compute-0 sudo[217392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtdmuneyncaupvoyjkrofdrgmaycqwnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297159.1809478-196-18194085511547/AnsiballZ_systemd.py'
Dec 09 16:19:19 compute-0 sudo[217392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:20 compute-0 python3.9[217394]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:19:20 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 09 16:19:20 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 09 16:19:20 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 09 16:19:20 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 09 16:19:20 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 09 16:19:20 compute-0 sudo[217392]: pam_unix(sudo:session): session closed for user root
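Restarting systemd-modules-load.service makes it re-read /etc/modules-load.d/*.conf, so the dm-multipath.conf dropped in a few tasks earlier takes effect now rather than at the next boot (the modprobe task had already loaded the module once, with persistent=disabled). The pair of steps, sketched:

    import subprocess

    # Persist the module across boots, then have systemd-modules-load pick
    # it up immediately, mirroring the copy + restart tasks logged above.
    with open("/etc/modules-load.d/dm-multipath.conf", "w") as f:
        f.write("dm-multipath\n")
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"],
                   check=True)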
Dec 09 16:19:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:19:20 compute-0 sudo[217548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfrmusmtlbgrpvclfhwfhafmdfpfvioe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297160.457926-204-170486596545648/AnsiballZ_file.py'
Dec 09 16:19:20 compute-0 sudo[217548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:21 compute-0 python3.9[217550]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:19:21 compute-0 sudo[217548]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:21 compute-0 ceph-mon[75222]: pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:21 compute-0 sudo[217700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnuvymczrrewrdzrpnfyeqqjezpbsnpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297161.3600452-213-231288715180286/AnsiballZ_stat.py'
Dec 09 16:19:21 compute-0 sudo[217700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:21 compute-0 python3.9[217702]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:19:21 compute-0 sudo[217700]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:22 compute-0 sudo[217852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxkwzejtbphqwyrpnytkhzdfzqeqmhce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297162.0986767-222-34143161419563/AnsiballZ_stat.py'
Dec 09 16:19:22 compute-0 sudo[217852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:22 compute-0 python3.9[217854]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:19:22 compute-0 sudo[217852]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:22 compute-0 ceph-mon[75222]: pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:23 compute-0 sudo[218004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrvmvmyxyyepuybizinhviyysfnzrxmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297162.874464-230-190980510613754/AnsiballZ_stat.py'
Dec 09 16:19:23 compute-0 sudo[218004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:23 compute-0 python3.9[218006]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:19:23 compute-0 sudo[218004]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:23 compute-0 sudo[218127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rytxposcjvfbosmepxccafmftemudfso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297162.874464-230-190980510613754/AnsiballZ_copy.py'
Dec 09 16:19:23 compute-0 sudo[218127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:24 compute-0 python3.9[218129]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765297162.874464-230-190980510613754/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:24 compute-0 sudo[218127]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:24 compute-0 sudo[218279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlcsgvmpwhbhdmcmbtavqayigslvhmpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297164.2332487-245-193849224113088/AnsiballZ_command.py'
Dec 09 16:19:24 compute-0 sudo[218279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:24 compute-0 python3.9[218281]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:19:24 compute-0 sudo[218279]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:25 compute-0 sudo[218432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovssspqztjlrieghwcmettcovwhmrixd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297164.9378202-253-32016368414419/AnsiballZ_lineinfile.py'
Dec 09 16:19:25 compute-0 sudo[218432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:25 compute-0 ceph-mon[75222]: pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:25 compute-0 python3.9[218434]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:25 compute-0 sudo[218432]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:19:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:19:25
Dec 09 16:19:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:19:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:19:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'default.rgw.log', 'images', 'backups', 'volumes', '.mgr', '.rgw.root']
Dec 09 16:19:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:19:26 compute-0 sudo[218584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eglmivmyzpvmswfiprnmigcxpkdzfgpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297165.5666058-261-58517581662917/AnsiballZ_replace.py'
Dec 09 16:19:26 compute-0 sudo[218584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:26 compute-0 python3.9[218586]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:26 compute-0 sudo[218584]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:19:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:19:26 compute-0 sudo[218736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrtwvixfnozlmavclsxukwninxlpuluq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297166.479584-269-1125599970703/AnsiballZ_replace.py'
Dec 09 16:19:26 compute-0 sudo[218736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:26 compute-0 python3.9[218738]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:26 compute-0 sudo[218736]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:27 compute-0 ceph-mon[75222]: pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:27 compute-0 sudo[218888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkuvdtygaaiwfoafhqeielaiyyjrfsbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297167.191689-278-104652715709287/AnsiballZ_lineinfile.py'
Dec 09 16:19:27 compute-0 sudo[218888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:27 compute-0 python3.9[218890]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:27 compute-0 sudo[218888]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:28 compute-0 sudo[219040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmbjjarkrktiojmsmsxiadmecehnzazz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297167.8006256-278-160538557674012/AnsiballZ_lineinfile.py'
Dec 09 16:19:28 compute-0 sudo[219040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:28 compute-0 python3.9[219042]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:28 compute-0 sudo[219040]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:28 compute-0 sudo[219192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxbtzebivqprhsgsmltgfmrpdchpwcjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297168.4476583-278-13473081612551/AnsiballZ_lineinfile.py'
Dec 09 16:19:28 compute-0 sudo[219192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:28 compute-0 python3.9[219194]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:28 compute-0 sudo[219192]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:29 compute-0 sudo[219344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvtufvkfyfnobboroychrwuhqtakeeyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297169.0389278-278-135015783156457/AnsiballZ_lineinfile.py'
Dec 09 16:19:29 compute-0 sudo[219344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:29 compute-0 python3.9[219346]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:29 compute-0 sudo[219344]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:29 compute-0 ceph-mon[75222]: pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:30 compute-0 sudo[219496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgtqlnqhizhvsfsapmjrpeibysbivouc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297169.7239797-307-277856260728086/AnsiballZ_stat.py'
Dec 09 16:19:30 compute-0 sudo[219496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:30 compute-0 python3.9[219498]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:19:30 compute-0 sudo[219496]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:19:30 compute-0 sudo[219650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfqweahpealxryevwkubafkyrqmdmevt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297170.5149693-315-107830523660033/AnsiballZ_file.py'
Dec 09 16:19:30 compute-0 sudo[219650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:31 compute-0 python3.9[219652]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:31 compute-0 sudo[219650]: pam_unix(sudo:session): session closed for user root
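[editor's note] Taken together, tasks 218434 through 219346 rewrite /etc/multipath.conf in place: ensure a bare 'blacklist {' stanza exists, close it, drop any catch-all devnode ".*" entry, then pin four settings (find_multipaths yes, recheck_wwid yes, skip_kpartx yes, user_friendly_names no) directly under the defaults section; the touch of /etc/multipath/.multipath_restart_required then flags multipathd for a restart. A rough re-implementation of that edit sequence with Python's re module, using the exact patterns logged above on a hypothetical starting file:

    import re

    # Hypothetical starting contents of /etc/multipath.conf.
    conf = "defaults {\n}\n"

    # Task 218434 (lineinfile): ensure a literal 'blacklist {' line, appending at EOF.
    if not re.search(r"^blacklist \{$", conf, flags=re.M):
        conf += "blacklist {\n"

    # Task 218586 (replace): close the freshly added stanza.
    conf = re.sub(r"^(blacklist {)", r"\1\n}", conf, flags=re.M)

    # Task 218738 (replace): collapse a catch-all devnode ".*" entry (no-op here).
    conf = re.sub(r'^blacklist\s*{\n[\s]+devnode "\.\*"', "blacklist {", conf, flags=re.M)

    def line_in_file(text, regexp, line, insertafter):
        """Tiny stand-in for lineinfile with firstmatch=True."""
        lines = text.splitlines()
        for i, existing in enumerate(lines):
            if re.match(regexp, existing):      # already present: replace in place
                lines[i] = line
                return "\n".join(lines) + "\n"
        for i, existing in enumerate(lines):
            if re.match(insertafter, existing): # else insert after the first anchor
                lines.insert(i + 1, line)
                break
        return "\n".join(lines) + "\n"

    # Tasks 218890/219042/219194/219346: pin the four defaults.
    for key, value in [("find_multipaths", "yes"), ("recheck_wwid", "yes"),
                       ("skip_kpartx", "yes"), ("user_friendly_names", "no")]:
        conf = line_in_file(conf, r"^\s+" + key, f"        {key} {value}", r"^defaults")

    print(conf)

Each lineinfile insert lands immediately after the defaults line, so the four settings end up in reverse task order inside the section, which is harmless for multipath.conf.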
Dec 09 16:19:31 compute-0 ceph-mon[75222]: pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:31 compute-0 sudo[219802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmlklxbripigxkxkynjbxdmdzntqrlcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297171.3682563-324-182185423824576/AnsiballZ_file.py'
Dec 09 16:19:31 compute-0 sudo[219802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:31 compute-0 python3.9[219804]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:19:31 compute-0 sudo[219802]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:32 compute-0 sudo[219954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-impeatvpvdfelaaxnzaqumyajonwxvym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297172.060036-332-156550747264584/AnsiballZ_stat.py'
Dec 09 16:19:32 compute-0 sudo[219954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:32 compute-0 python3.9[219956]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:19:32 compute-0 sudo[219954]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:32 compute-0 sudo[220032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiaowndxejhgfdsyfguetamhzamwrymj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297172.060036-332-156550747264584/AnsiballZ_file.py'
Dec 09 16:19:32 compute-0 sudo[220032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:33 compute-0 python3.9[220034]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:19:33 compute-0 sudo[220032]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:33 compute-0 sudo[220184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuvkfgyfwwwycztyewpaumwovaxcqzai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297173.20918-332-231055808511382/AnsiballZ_stat.py'
Dec 09 16:19:33 compute-0 sudo[220184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:33 compute-0 ceph-mon[75222]: pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:33 compute-0 python3.9[220186]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:19:33 compute-0 sudo[220184]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:33 compute-0 sudo[220262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbasfaglsdqlwwdpehfmfpdocamgmyiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297173.20918-332-231055808511382/AnsiballZ_file.py'
Dec 09 16:19:33 compute-0 sudo[220262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:34 compute-0 python3.9[220264]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:19:34 compute-0 sudo[220262]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:34 compute-0 sudo[220414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omukewhknsctffjmyqtglwfzirphgegn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297174.398388-355-97502534590728/AnsiballZ_file.py'
Dec 09 16:19:34 compute-0 sudo[220414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:34 compute-0 python3.9[220416]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:35 compute-0 sudo[220414]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:19:35 compute-0 sudo[220566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivvytgvdvxkcehsrmggjjqkfaprfwcnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297175.2040658-363-220595941756771/AnsiballZ_stat.py'
Dec 09 16:19:35 compute-0 sudo[220566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:35 compute-0 ceph-mon[75222]: pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:35 compute-0 python3.9[220568]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:19:35 compute-0 sudo[220566]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:36 compute-0 sudo[220644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sszxkgnjgheasxgrejndvkuxpmugokih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297175.2040658-363-220595941756771/AnsiballZ_file.py'
Dec 09 16:19:36 compute-0 sudo[220644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:36 compute-0 python3.9[220646]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:36 compute-0 sudo[220644]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:19:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
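[editor's note] Every _maybe_adjust line above follows one arithmetic rule: the raw PG target is the pool's share of capacity times its bias times a per-cluster PG budget, before being quantized to the 'quantized to N' figure. From the logged numbers that budget works out to exactly 300, consistent with mon_target_pg_per_osd at its default of 100 across three OSDs; the 100 x 3 split is an inference from the figures, not something the log states. A quick check in Python, reproducing the '.mgr' and 'cephfs.cephfs.meta' lines:

    import math

    PG_BUDGET = 100 * 3  # assumed: mon_target_pg_per_osd (default 100) x 3 OSDs

    for pool, used_ratio, bias, logged in [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 2.0333656678172135e-06, 4.0, 0.002440038801380656),
    ]:
        target = used_ratio * bias * PG_BUDGET
        print(pool, target, math.isclose(target, logged, rel_tol=1e-12))

Both comparisons print True, and the same multiplication reproduces the '.rgw.root', 'default.rgw.log', and 'default.rgw.meta' targets as well.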
Dec 09 16:19:36 compute-0 sudo[220796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhxbsppsawkcxwecmkroxhgcfmowrzfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297176.4746304-375-162913663810621/AnsiballZ_stat.py'
Dec 09 16:19:36 compute-0 sudo[220796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:37 compute-0 python3.9[220798]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:19:37 compute-0 sudo[220796]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:37 compute-0 sudo[220874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpeizwzfytwzvukmyldswidoznjowkxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297176.4746304-375-162913663810621/AnsiballZ_file.py'
Dec 09 16:19:37 compute-0 sudo[220874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:37 compute-0 python3.9[220876]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:37 compute-0 sudo[220874]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:37 compute-0 ceph-mon[75222]: pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:38 compute-0 sudo[221026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbritapgusbamgfjvgxcxakfmkphptvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297177.7131622-387-264404008827569/AnsiballZ_systemd.py'
Dec 09 16:19:38 compute-0 sudo[221026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:38 compute-0 sudo[221029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:19:38 compute-0 sudo[221029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:19:38 compute-0 sudo[221029]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:38 compute-0 sudo[221054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:19:38 compute-0 sudo[221054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:19:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:38 compute-0 python3.9[221028]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:19:38 compute-0 systemd[1]: Reloading.
Dec 09 16:19:38 compute-0 systemd-sysv-generator[221122]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:19:38 compute-0 systemd-rc-local-generator[221118]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:19:38 compute-0 sudo[221026]: pam_unix(sudo:session): session closed for user root
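[editor's note] Task 221028 is the systemd half of the container-shutdown wiring: the daemon_reload=True triggers the 'Reloading.' and generator chatter above, after which the unit installed at /etc/systemd/system/edpm-container-shutdown.service is enabled and started (the 91-edpm-container-shutdown.preset written earlier presumably covers the same unit for preset-based enablement). The module call is roughly equivalent to the following sketch; 'enable --now' folds Ansible's separate enable and start steps into one command:

    import subprocess

    # Equivalent of ansible.builtin.systemd with daemon_reload=True,
    # enabled=True, state=started, scope=system.
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", "--now",
                    "edpm-container-shutdown.service"], check=True)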
Dec 09 16:19:38 compute-0 sudo[221054]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:19:38 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:19:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:19:38 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:19:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:19:38 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:19:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:19:38 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:19:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:19:38 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:19:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:19:38 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
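[editor's note] The burst of mon_command dispatches above is cephadm's pre-OSD-deploy refresh driven through the mgr: regenerate a minimal ceph.conf, fetch the client.admin and client.bootstrap-osd keys, and list destroyed OSDs so their ids can be reused. A sketch of the destroyed-OSD query from a script, assuming the ceph CLI is available (the "nodes"/"type" JSON field names are the usual osd tree layout, not something this log shows):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    tree = json.loads(out)
    # "nodes" mixes CRUSH buckets and OSDs; keep only the OSD entries.
    print([n["id"] for n in tree.get("nodes", []) if n.get("type") == "osd"])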
Dec 09 16:19:38 compute-0 sudo[221171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:19:38 compute-0 sudo[221171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:19:38 compute-0 sudo[221171]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:38 compute-0 sudo[221197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:19:38 compute-0 sudo[221197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:19:39 compute-0 podman[221317]: 2025-12-09 16:19:39.257462254 +0000 UTC m=+0.047242764 container create b8dfba6c6040041c2775354a93d382156516fd55c27e0a45852796b9273e381d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bhabha, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 09 16:19:39 compute-0 systemd[1]: Started libpod-conmon-b8dfba6c6040041c2775354a93d382156516fd55c27e0a45852796b9273e381d.scope.
Dec 09 16:19:39 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:19:39 compute-0 sudo[221378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmmcsbgbwdemztohfdtwfhjysvhhxwfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297178.971982-395-16934907777760/AnsiballZ_stat.py'
Dec 09 16:19:39 compute-0 sudo[221378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:39 compute-0 podman[221317]: 2025-12-09 16:19:39.235804069 +0000 UTC m=+0.025584629 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:19:39 compute-0 podman[221317]: 2025-12-09 16:19:39.342478942 +0000 UTC m=+0.132259502 container init b8dfba6c6040041c2775354a93d382156516fd55c27e0a45852796b9273e381d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bhabha, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:19:39 compute-0 podman[221317]: 2025-12-09 16:19:39.351542469 +0000 UTC m=+0.141322979 container start b8dfba6c6040041c2775354a93d382156516fd55c27e0a45852796b9273e381d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:19:39 compute-0 podman[221317]: 2025-12-09 16:19:39.35543353 +0000 UTC m=+0.145214060 container attach b8dfba6c6040041c2775354a93d382156516fd55c27e0a45852796b9273e381d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bhabha, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 09 16:19:39 compute-0 jovial_bhabha[221376]: 167 167
Dec 09 16:19:39 compute-0 systemd[1]: libpod-b8dfba6c6040041c2775354a93d382156516fd55c27e0a45852796b9273e381d.scope: Deactivated successfully.
Dec 09 16:19:39 compute-0 podman[221317]: 2025-12-09 16:19:39.361686918 +0000 UTC m=+0.151467478 container died b8dfba6c6040041c2775354a93d382156516fd55c27e0a45852796b9273e381d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bhabha, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 16:19:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dcddb374b6b9517dfb03d3d1ec5db4e85afdca0972664372a8542946b71c67d-merged.mount: Deactivated successfully.
Dec 09 16:19:39 compute-0 podman[221317]: 2025-12-09 16:19:39.406556044 +0000 UTC m=+0.196336554 container remove b8dfba6c6040041c2775354a93d382156516fd55c27e0a45852796b9273e381d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 16:19:39 compute-0 systemd[1]: libpod-conmon-b8dfba6c6040041c2775354a93d382156516fd55c27e0a45852796b9273e381d.scope: Deactivated successfully.
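[editor's note] The short-lived jovial_bhabha container above (create through remove in about 100 ms, emitting only '167 167') looks like cephadm's uid/gid probe: before running ceph-volume it starts the ceph image once just to read the ownership of /var/lib/ceph, and 167 is the ceph user and group id in the official images. A sketch of the same probe, assuming podman is on PATH and using the image digest from the log; the probe's purpose is inferred from its output, not stated in the log itself:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # One-shot container: print uid and gid of /var/lib/ceph inside the image.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())  # expected "167 167", matching the container output above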
Dec 09 16:19:39 compute-0 python3.9[221381]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:19:39 compute-0 sudo[221378]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:39 compute-0 podman[221402]: 2025-12-09 16:19:39.608157496 +0000 UTC m=+0.059509183 container create b636a8d5538a2597c63de7dae2f830da0f9b8131c1db60edffaccffc044e1b13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:19:39 compute-0 ceph-mon[75222]: pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:19:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:19:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:19:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:19:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:19:39 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:19:39 compute-0 systemd[1]: Started libpod-conmon-b636a8d5538a2597c63de7dae2f830da0f9b8131c1db60edffaccffc044e1b13.scope.
Dec 09 16:19:39 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82dcbe63d733e439c90b6e3a6d359ce5329f9cf07b26d46aa3a62868c2d4503/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:39 compute-0 podman[221402]: 2025-12-09 16:19:39.588878638 +0000 UTC m=+0.040230305 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82dcbe63d733e439c90b6e3a6d359ce5329f9cf07b26d46aa3a62868c2d4503/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82dcbe63d733e439c90b6e3a6d359ce5329f9cf07b26d46aa3a62868c2d4503/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82dcbe63d733e439c90b6e3a6d359ce5329f9cf07b26d46aa3a62868c2d4503/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82dcbe63d733e439c90b6e3a6d359ce5329f9cf07b26d46aa3a62868c2d4503/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:39 compute-0 podman[221402]: 2025-12-09 16:19:39.700148431 +0000 UTC m=+0.151500098 container init b636a8d5538a2597c63de7dae2f830da0f9b8131c1db60edffaccffc044e1b13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:19:39 compute-0 podman[221402]: 2025-12-09 16:19:39.707911302 +0000 UTC m=+0.159262959 container start b636a8d5538a2597c63de7dae2f830da0f9b8131c1db60edffaccffc044e1b13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 16:19:39 compute-0 podman[221402]: 2025-12-09 16:19:39.711115643 +0000 UTC m=+0.162467300 container attach b636a8d5538a2597c63de7dae2f830da0f9b8131c1db60edffaccffc044e1b13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:19:39 compute-0 sudo[221498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvhkucccaidjgkvitndqpnjrprusbmau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297178.971982-395-16934907777760/AnsiballZ_file.py'
Dec 09 16:19:39 compute-0 sudo[221498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:40 compute-0 python3.9[221500]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:40 compute-0 sudo[221498]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:40 compute-0 heuristic_williamson[221438]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:19:40 compute-0 heuristic_williamson[221438]: --> All data devices are unavailable
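[editor's note] '0 physical, 3 LVM' followed by 'All data devices are unavailable' is lvm batch declining to act: the three LVs passed on the command line already carry OSD metadata in their lv_tags (ceph.osd_id and friends, visible in the `lvm list --format json` output further below), so ceph-volume treats them as taken rather than redeploying them. A small parser sketch over a fragment shaped like that JSON, splitting the comma-separated lv_tags string into a dict (values abridged from the output below):

    import json

    raw = """{
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "lv_tags": "ceph.cluster_name=ceph,ceph.objectstore=bluestore,ceph.osd_id=0,ceph.type=block"
        }
      ]
    }"""

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            # lv_tags is "key=value,key=value": break it into a dict.
            tags = dict(t.split("=", 1) for t in lv["lv_tags"].split(","))
            print(osd_id, lv["lv_path"], lv["devices"][0],
                  tags["ceph.osd_id"], tags["ceph.objectstore"])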
Dec 09 16:19:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:40 compute-0 systemd[1]: libpod-b636a8d5538a2597c63de7dae2f830da0f9b8131c1db60edffaccffc044e1b13.scope: Deactivated successfully.
Dec 09 16:19:40 compute-0 podman[221402]: 2025-12-09 16:19:40.259713032 +0000 UTC m=+0.711064719 container died b636a8d5538a2597c63de7dae2f830da0f9b8131c1db60edffaccffc044e1b13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c82dcbe63d733e439c90b6e3a6d359ce5329f9cf07b26d46aa3a62868c2d4503-merged.mount: Deactivated successfully.
Dec 09 16:19:40 compute-0 podman[221402]: 2025-12-09 16:19:40.312971896 +0000 UTC m=+0.764323553 container remove b636a8d5538a2597c63de7dae2f830da0f9b8131c1db60edffaccffc044e1b13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 16:19:40 compute-0 systemd[1]: libpod-conmon-b636a8d5538a2597c63de7dae2f830da0f9b8131c1db60edffaccffc044e1b13.scope: Deactivated successfully.
Dec 09 16:19:40 compute-0 sudo[221197]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:40 compute-0 sudo[221625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:19:40 compute-0 sudo[221625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:19:40 compute-0 sudo[221625]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:19:40 compute-0 sudo[221671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:19:40 compute-0 sudo[221671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:19:40 compute-0 sudo[221728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iumfmhcpbdfabeoeljwzqesujauftrow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297180.246994-407-193978305798529/AnsiballZ_stat.py'
Dec 09 16:19:40 compute-0 sudo[221728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:40 compute-0 python3.9[221730]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:19:40 compute-0 sudo[221728]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:40 compute-0 podman[221745]: 2025-12-09 16:19:40.849066779 +0000 UTC m=+0.053872413 container create 137e21b2c3f2b9ceb68e709d0f5ef2b771d8a13103fae339bd645337a6cc5ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carver, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 09 16:19:40 compute-0 systemd[1]: Started libpod-conmon-137e21b2c3f2b9ceb68e709d0f5ef2b771d8a13103fae339bd645337a6cc5ab9.scope.
Dec 09 16:19:40 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:19:40 compute-0 podman[221745]: 2025-12-09 16:19:40.818598512 +0000 UTC m=+0.023404226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:19:40 compute-0 podman[221745]: 2025-12-09 16:19:40.915977771 +0000 UTC m=+0.120783415 container init 137e21b2c3f2b9ceb68e709d0f5ef2b771d8a13103fae339bd645337a6cc5ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:19:40 compute-0 podman[221745]: 2025-12-09 16:19:40.922511517 +0000 UTC m=+0.127317141 container start 137e21b2c3f2b9ceb68e709d0f5ef2b771d8a13103fae339bd645337a6cc5ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:19:40 compute-0 podman[221745]: 2025-12-09 16:19:40.926708426 +0000 UTC m=+0.131514100 container attach 137e21b2c3f2b9ceb68e709d0f5ef2b771d8a13103fae339bd645337a6cc5ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:19:40 compute-0 vibrant_carver[221763]: 167 167
Dec 09 16:19:40 compute-0 systemd[1]: libpod-137e21b2c3f2b9ceb68e709d0f5ef2b771d8a13103fae339bd645337a6cc5ab9.scope: Deactivated successfully.
Dec 09 16:19:40 compute-0 podman[221745]: 2025-12-09 16:19:40.928964861 +0000 UTC m=+0.133770505 container died 137e21b2c3f2b9ceb68e709d0f5ef2b771d8a13103fae339bd645337a6cc5ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carver, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 16:19:40 compute-0 sshd-session[221731]: Invalid user odoo from 146.190.31.45 port 38814
Dec 09 16:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc51cc9bd142de488868a9b31b4042031f53d9bb8ae985eca8eeac8ed9b5b4fd-merged.mount: Deactivated successfully.
Dec 09 16:19:40 compute-0 podman[221745]: 2025-12-09 16:19:40.973108096 +0000 UTC m=+0.177913720 container remove 137e21b2c3f2b9ceb68e709d0f5ef2b771d8a13103fae339bd645337a6cc5ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carver, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:19:40 compute-0 systemd[1]: libpod-conmon-137e21b2c3f2b9ceb68e709d0f5ef2b771d8a13103fae339bd645337a6cc5ab9.scope: Deactivated successfully.
Dec 09 16:19:41 compute-0 sshd-session[221731]: Connection closed by invalid user odoo 146.190.31.45 port 38814 [preauth]
Dec 09 16:19:41 compute-0 sudo[221858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpszhbqgxvmlyusopayymqbyukztcnjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297180.246994-407-193978305798529/AnsiballZ_file.py'
Dec 09 16:19:41 compute-0 sudo[221858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:41 compute-0 podman[221860]: 2025-12-09 16:19:41.121076743 +0000 UTC m=+0.037444286 container create 2fc85aef40b1cec17776a3e5da69d698ad9d71b0f443d06a360b046d3bef0f25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:19:41 compute-0 systemd[1]: Started libpod-conmon-2fc85aef40b1cec17776a3e5da69d698ad9d71b0f443d06a360b046d3bef0f25.scope.
Dec 09 16:19:41 compute-0 podman[221860]: 2025-12-09 16:19:41.105954573 +0000 UTC m=+0.022322146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:19:41 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7f2c543dbe140a2807f7effadbc58cf851e4d4e7729ac4e51190d65dd48a0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7f2c543dbe140a2807f7effadbc58cf851e4d4e7729ac4e51190d65dd48a0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7f2c543dbe140a2807f7effadbc58cf851e4d4e7729ac4e51190d65dd48a0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7f2c543dbe140a2807f7effadbc58cf851e4d4e7729ac4e51190d65dd48a0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:41 compute-0 podman[221860]: 2025-12-09 16:19:41.234708394 +0000 UTC m=+0.151076017 container init 2fc85aef40b1cec17776a3e5da69d698ad9d71b0f443d06a360b046d3bef0f25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:19:41 compute-0 podman[221860]: 2025-12-09 16:19:41.244584295 +0000 UTC m=+0.160951848 container start 2fc85aef40b1cec17776a3e5da69d698ad9d71b0f443d06a360b046d3bef0f25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:19:41 compute-0 podman[221860]: 2025-12-09 16:19:41.24863444 +0000 UTC m=+0.165002023 container attach 2fc85aef40b1cec17776a3e5da69d698ad9d71b0f443d06a360b046d3bef0f25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:19:41 compute-0 python3.9[221868]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:41 compute-0 sudo[221858]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]: {
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:     "0": [
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:         {
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "devices": [
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "/dev/loop3"
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             ],
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_name": "ceph_lv0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_size": "21470642176",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "name": "ceph_lv0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "tags": {
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.cluster_name": "ceph",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.crush_device_class": "",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.encrypted": "0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.objectstore": "bluestore",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.osd_id": "0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.type": "block",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.vdo": "0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.with_tpm": "0"
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             },
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "type": "block",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "vg_name": "ceph_vg0"
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:         }
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:     ],
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:     "1": [
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:         {
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "devices": [
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "/dev/loop4"
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             ],
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_name": "ceph_lv1",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_size": "21470642176",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "name": "ceph_lv1",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "tags": {
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.cluster_name": "ceph",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.crush_device_class": "",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.encrypted": "0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.objectstore": "bluestore",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.osd_id": "1",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.type": "block",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.vdo": "0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.with_tpm": "0"
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             },
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "type": "block",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "vg_name": "ceph_vg1"
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:         }
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:     ],
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:     "2": [
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:         {
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "devices": [
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "/dev/loop5"
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             ],
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_name": "ceph_lv2",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_size": "21470642176",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "name": "ceph_lv2",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "tags": {
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.cluster_name": "ceph",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.crush_device_class": "",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.encrypted": "0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.objectstore": "bluestore",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.osd_id": "2",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.type": "block",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.vdo": "0",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:                 "ceph.with_tpm": "0"
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             },
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "type": "block",
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:             "vg_name": "ceph_vg2"
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:         }
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]:     ]
Dec 09 16:19:41 compute-0 sleepy_varahamihira[221879]: }
Dec 09 16:19:41 compute-0 systemd[1]: libpod-2fc85aef40b1cec17776a3e5da69d698ad9d71b0f443d06a360b046d3bef0f25.scope: Deactivated successfully.
Dec 09 16:19:41 compute-0 podman[221860]: 2025-12-09 16:19:41.585550599 +0000 UTC m=+0.501918142 container died 2fc85aef40b1cec17776a3e5da69d698ad9d71b0f443d06a360b046d3bef0f25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_varahamihira, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb7f2c543dbe140a2807f7effadbc58cf851e4d4e7729ac4e51190d65dd48a0e-merged.mount: Deactivated successfully.
Dec 09 16:19:41 compute-0 podman[221860]: 2025-12-09 16:19:41.631037723 +0000 UTC m=+0.547405296 container remove 2fc85aef40b1cec17776a3e5da69d698ad9d71b0f443d06a360b046d3bef0f25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_varahamihira, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:19:41 compute-0 systemd[1]: libpod-conmon-2fc85aef40b1cec17776a3e5da69d698ad9d71b0f443d06a360b046d3bef0f25.scope: Deactivated successfully.
Dec 09 16:19:41 compute-0 ceph-mon[75222]: pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:41 compute-0 sudo[221671]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:41 compute-0 sudo[221999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:19:41 compute-0 sudo[221999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:19:41 compute-0 sudo[221999]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:41 compute-0 sudo[222048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:19:41 compute-0 sudo[222048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:19:41 compute-0 sudo[222099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbyhfdlhflqnlqzohrwvexcacqtijbeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297181.5091448-419-33049426968748/AnsiballZ_systemd.py'
Dec 09 16:19:41 compute-0 sudo[222099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:42 compute-0 podman[222113]: 2025-12-09 16:19:42.095671424 +0000 UTC m=+0.061155110 container create e770240c9ac6232dfdde02f20c65c72157fe67535a10b07754a38f70e231f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:19:42 compute-0 python3.9[222101]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:19:42 compute-0 systemd[1]: Reloading.
Dec 09 16:19:42 compute-0 podman[222113]: 2025-12-09 16:19:42.061853962 +0000 UTC m=+0.027337648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:19:42 compute-0 systemd-rc-local-generator[222157]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:19:42 compute-0 systemd-sysv-generator[222160]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:19:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:42 compute-0 systemd[1]: Started libpod-conmon-e770240c9ac6232dfdde02f20c65c72157fe67535a10b07754a38f70e231f7ba.scope.
Dec 09 16:19:42 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:19:42 compute-0 podman[222113]: 2025-12-09 16:19:42.499885416 +0000 UTC m=+0.465369082 container init e770240c9ac6232dfdde02f20c65c72157fe67535a10b07754a38f70e231f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 16:19:42 compute-0 podman[222113]: 2025-12-09 16:19:42.508267674 +0000 UTC m=+0.473751330 container start e770240c9ac6232dfdde02f20c65c72157fe67535a10b07754a38f70e231f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:19:42 compute-0 podman[222113]: 2025-12-09 16:19:42.512436173 +0000 UTC m=+0.477919829 container attach e770240c9ac6232dfdde02f20c65c72157fe67535a10b07754a38f70e231f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 09 16:19:42 compute-0 fervent_agnesi[222166]: 167 167
Dec 09 16:19:42 compute-0 podman[222113]: 2025-12-09 16:19:42.513774231 +0000 UTC m=+0.479257887 container died e770240c9ac6232dfdde02f20c65c72157fe67535a10b07754a38f70e231f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:19:42 compute-0 systemd[1]: Starting Create netns directory...
Dec 09 16:19:42 compute-0 systemd[1]: libpod-e770240c9ac6232dfdde02f20c65c72157fe67535a10b07754a38f70e231f7ba.scope: Deactivated successfully.
Dec 09 16:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1deed280e1bd9336f3787d7b4683e906e1f32da4c9224cea86c9ea23c85875d-merged.mount: Deactivated successfully.
Dec 09 16:19:42 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 09 16:19:42 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 09 16:19:42 compute-0 systemd[1]: Finished Create netns directory.
Dec 09 16:19:42 compute-0 podman[222113]: 2025-12-09 16:19:42.559642985 +0000 UTC m=+0.525126651 container remove e770240c9ac6232dfdde02f20c65c72157fe67535a10b07754a38f70e231f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:19:42 compute-0 systemd[1]: libpod-conmon-e770240c9ac6232dfdde02f20c65c72157fe67535a10b07754a38f70e231f7ba.scope: Deactivated successfully.
Dec 09 16:19:42 compute-0 sudo[222099]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:42 compute-0 podman[222219]: 2025-12-09 16:19:42.719003406 +0000 UTC m=+0.039957817 container create 1a102fb8375d12c62ef5df72f66ba7df4cba72a684ad5b83ca2f05d33596d8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:19:42 compute-0 systemd[1]: Started libpod-conmon-1a102fb8375d12c62ef5df72f66ba7df4cba72a684ad5b83ca2f05d33596d8e6.scope.
Dec 09 16:19:42 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d8d84a3bb02a0ebd69f0be1d43720c6a990dddc4126bff29780615a7b4aa7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d8d84a3bb02a0ebd69f0be1d43720c6a990dddc4126bff29780615a7b4aa7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d8d84a3bb02a0ebd69f0be1d43720c6a990dddc4126bff29780615a7b4aa7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d8d84a3bb02a0ebd69f0be1d43720c6a990dddc4126bff29780615a7b4aa7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:19:42 compute-0 podman[222219]: 2025-12-09 16:19:42.70016147 +0000 UTC m=+0.021115911 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:19:42 compute-0 podman[222219]: 2025-12-09 16:19:42.804747724 +0000 UTC m=+0.125702185 container init 1a102fb8375d12c62ef5df72f66ba7df4cba72a684ad5b83ca2f05d33596d8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_curie, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:19:42 compute-0 podman[222219]: 2025-12-09 16:19:42.816856348 +0000 UTC m=+0.137810799 container start 1a102fb8375d12c62ef5df72f66ba7df4cba72a684ad5b83ca2f05d33596d8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:19:42 compute-0 podman[222219]: 2025-12-09 16:19:42.821755227 +0000 UTC m=+0.142709768 container attach 1a102fb8375d12c62ef5df72f66ba7df4cba72a684ad5b83ca2f05d33596d8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_curie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:19:43 compute-0 sudo[222401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrklyxcuvzwxycmbdfhmtzsszxcltgya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297182.994812-429-79597559247200/AnsiballZ_file.py'
Dec 09 16:19:43 compute-0 sudo[222401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:43 compute-0 python3.9[222407]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:19:43 compute-0 sudo[222401]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:43 compute-0 lvm[222445]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:19:43 compute-0 lvm[222446]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:19:43 compute-0 lvm[222445]: VG ceph_vg0 finished
Dec 09 16:19:43 compute-0 lvm[222446]: VG ceph_vg1 finished
Dec 09 16:19:43 compute-0 lvm[222468]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:19:43 compute-0 lvm[222468]: VG ceph_vg2 finished
Dec 09 16:19:43 compute-0 reverent_curie[222235]: {}
Dec 09 16:19:43 compute-0 ceph-mon[75222]: pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:43 compute-0 systemd[1]: libpod-1a102fb8375d12c62ef5df72f66ba7df4cba72a684ad5b83ca2f05d33596d8e6.scope: Deactivated successfully.
Dec 09 16:19:43 compute-0 systemd[1]: libpod-1a102fb8375d12c62ef5df72f66ba7df4cba72a684ad5b83ca2f05d33596d8e6.scope: Consumed 1.321s CPU time.
Dec 09 16:19:43 compute-0 podman[222219]: 2025-12-09 16:19:43.683071317 +0000 UTC m=+1.004025748 container died 1a102fb8375d12c62ef5df72f66ba7df4cba72a684ad5b83ca2f05d33596d8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Dec 09 16:19:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-62d8d84a3bb02a0ebd69f0be1d43720c6a990dddc4126bff29780615a7b4aa7f-merged.mount: Deactivated successfully.
Dec 09 16:19:43 compute-0 podman[222219]: 2025-12-09 16:19:43.72713561 +0000 UTC m=+1.048090021 container remove 1a102fb8375d12c62ef5df72f66ba7df4cba72a684ad5b83ca2f05d33596d8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 16:19:43 compute-0 systemd[1]: libpod-conmon-1a102fb8375d12c62ef5df72f66ba7df4cba72a684ad5b83ca2f05d33596d8e6.scope: Deactivated successfully.
Dec 09 16:19:43 compute-0 sudo[222048]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:43 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:19:43 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:19:43 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:19:43 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:19:43 compute-0 sudo[222555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:19:43 compute-0 sudo[222555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:19:43 compute-0 sudo[222555]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:43 compute-0 sudo[222633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yczbjvpdqdaraaqozrvryxaixzciryej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297183.6829007-437-82549450987846/AnsiballZ_stat.py'
Dec 09 16:19:43 compute-0 sudo[222633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:44 compute-0 python3.9[222635]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:19:44 compute-0 sudo[222633]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:44 compute-0 sudo[222756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwgomgyxdabbejqdnofnlbbizmapsrly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297183.6829007-437-82549450987846/AnsiballZ_copy.py'
Dec 09 16:19:44 compute-0 sudo[222756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:44 compute-0 python3.9[222758]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765297183.6829007-437-82549450987846/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:19:44 compute-0 sudo[222756]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:19:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:19:45 compute-0 sudo[222908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmbmcsjdapyufqqrvggpqoexnmlvrrqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297185.1629748-454-237528317450734/AnsiballZ_file.py'
Dec 09 16:19:45 compute-0 sudo[222908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:19:45 compute-0 python3.9[222910]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:19:45 compute-0 sudo[222908]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:45 compute-0 ceph-mon[75222]: pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:46 compute-0 sudo[223060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfntfakoksxcuoajmnmrhsnkdhcrxxwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297185.952841-462-279887172017043/AnsiballZ_stat.py'
Dec 09 16:19:46 compute-0 sudo[223060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:46 compute-0 python3.9[223062]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:19:46 compute-0 sudo[223060]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:46 compute-0 sudo[223183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrnyrtlygbnegzlafgntpbhnjstbmnyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297185.952841-462-279887172017043/AnsiballZ_copy.py'
Dec 09 16:19:46 compute-0 sudo[223183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:47 compute-0 python3.9[223185]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765297185.952841-462-279887172017043/.source.json _original_basename=.q8oolo5l follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:47 compute-0 sudo[223183]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:47 compute-0 sudo[223354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmjhgsmiinjcicdbgcalesocamlbjqgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297187.3090858-477-29251608325162/AnsiballZ_file.py'
Dec 09 16:19:47 compute-0 sudo[223354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:47 compute-0 podman[223289]: 2025-12-09 16:19:47.684751727 +0000 UTC m=+0.117926054 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 09 16:19:47 compute-0 ceph-mon[75222]: pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:47 compute-0 python3.9[223360]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:47 compute-0 sudo[223354]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:48 compute-0 sudo[223512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmnletzhfknwbcijvwzseetdwzfclhrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297188.131894-485-250936118447105/AnsiballZ_stat.py'
Dec 09 16:19:48 compute-0 sudo[223512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:48 compute-0 sudo[223512]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:49 compute-0 sudo[223635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxdazivgnjvioodarjusufiuqsullfqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297188.131894-485-250936118447105/AnsiballZ_copy.py'
Dec 09 16:19:49 compute-0 sudo[223635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:49 compute-0 sudo[223635]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:49 compute-0 ceph-mon[75222]: pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:49 compute-0 sudo[223804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wetzfovqwcklrjzdmwktahkdupjngvbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297189.5170753-502-111714935353712/AnsiballZ_container_config_data.py'
Dec 09 16:19:49 compute-0 sudo[223804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:50 compute-0 podman[223761]: 2025-12-09 16:19:50.021067834 +0000 UTC m=+0.077253327 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 09 16:19:50 compute-0 python3.9[223808]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 09 16:19:50 compute-0 sudo[223804]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:19:50 compute-0 sudo[223959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfemashohzfqiujdjbvutsnxwmzbeavi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297190.4882452-511-191930141758733/AnsiballZ_container_config_hash.py'
Dec 09 16:19:50 compute-0 sudo[223959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:51 compute-0 python3.9[223961]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 16:19:51 compute-0 sudo[223959]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:51 compute-0 ceph-mon[75222]: pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:51 compute-0 sudo[224111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsmiitukxlrpficwjrzmemqobeshoxpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297191.4511106-520-87249917020790/AnsiballZ_podman_container_info.py'
Dec 09 16:19:51 compute-0 sudo[224111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:52 compute-0 python3.9[224113]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 09 16:19:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:52 compute-0 sudo[224111]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:53 compute-0 sudo[224290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxvnnakergvwbngfkkbifiliturwbinx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765297193.2680378-533-122170297101514/AnsiballZ_edpm_container_manage.py'
Dec 09 16:19:53 compute-0 sudo[224290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:53 compute-0 ceph-mon[75222]: pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:54 compute-0 python3[224292]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 16:19:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:55 compute-0 podman[224303]: 2025-12-09 16:19:55.403230745 +0000 UTC m=+1.217080766 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f
Dec 09 16:19:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:19:55 compute-0 podman[224358]: 2025-12-09 16:19:55.522597949 +0000 UTC m=+0.045344400 container create 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 09 16:19:55 compute-0 podman[224358]: 2025-12-09 16:19:55.495934231 +0000 UTC m=+0.018680702 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f
Dec 09 16:19:55 compute-0 python3[224292]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f
Dec 09 16:19:55 compute-0 sudo[224290]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:55 compute-0 ceph-mon[75222]: pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:56 compute-0 sudo[224544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aephjtuphqmehhvwzftyptbibvysjtof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297195.8628893-541-231124144144861/AnsiballZ_stat.py'
Dec 09 16:19:56 compute-0 sudo[224544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:56 compute-0 python3.9[224546]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:19:56 compute-0 sudo[224544]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:19:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:19:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:19:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:19:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:19:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:19:57 compute-0 sudo[224698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsswfgiwssektrupbxtlkkyownykawps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297196.7484162-550-52083934355749/AnsiballZ_file.py'
Dec 09 16:19:57 compute-0 sudo[224698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:57 compute-0 python3.9[224700]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:57 compute-0 sudo[224698]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:57 compute-0 sudo[224774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umarxaighfqnhyzhvtypznublhsvtihe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297196.7484162-550-52083934355749/AnsiballZ_stat.py'
Dec 09 16:19:57 compute-0 sudo[224774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:57 compute-0 python3.9[224776]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:19:57 compute-0 sudo[224774]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:57 compute-0 ceph-mon[75222]: pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:58 compute-0 sudo[224925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgnvziiqswtalwgnzhjbxtixpmiaafpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297197.7727628-550-203631402150763/AnsiballZ_copy.py'
Dec 09 16:19:58 compute-0 sudo[224925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:58 compute-0 python3.9[224927]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765297197.7727628-550-203631402150763/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:19:58 compute-0 sudo[224925]: pam_unix(sudo:session): session closed for user root
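The ansible-copy task above installs /etc/systemd/system/edpm_multipathd.service (root:root, mode 0644); the unit body itself is not logged (content=NOT_LOGGING_PARAMETER). Once the daemon-reload in the next task has run, the deployed unit can be printed back for inspection:

    # read the unit file the role just wrote
    systemctl cat edpm_multipathd.service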
Dec 09 16:19:58 compute-0 sudo[225001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwuiqdmmhnkdanueniknezmfdgkvzkha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297197.7727628-550-203631402150763/AnsiballZ_systemd.py'
Dec 09 16:19:58 compute-0 sudo[225001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:58 compute-0 ceph-mon[75222]: pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:19:59 compute-0 python3.9[225003]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 16:19:59 compute-0 systemd[1]: Reloading.
Dec 09 16:19:59 compute-0 systemd-rc-local-generator[225028]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:19:59 compute-0 systemd-sysv-generator[225032]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:19:59 compute-0 sudo[225001]: pam_unix(sudo:session): session closed for user root
Dec 09 16:19:59 compute-0 sudo[225112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgzrhhkbfkpcwxcjxrsciprrcdwyrkoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297197.7727628-550-203631402150763/AnsiballZ_systemd.py'
Dec 09 16:19:59 compute-0 sudo[225112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:19:59 compute-0 python3.9[225114]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
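The ansible-systemd call above (enabled=True, state=restarted) is roughly equivalent to running, by hand:

    # approximate manual equivalent of the module invocation
    systemctl enable edpm_multipathd.service
    systemctl restart edpm_multipathd.service

systemctl enable reloads the manager configuration by default (unless --no-reload is given), which is consistent with the "Reloading." line that follows.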
Dec 09 16:20:00 compute-0 systemd[1]: Reloading.
Dec 09 16:20:00 compute-0 systemd-rc-local-generator[225144]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:20:00 compute-0 systemd-sysv-generator[225147]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:20:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:00 compute-0 systemd[1]: Starting multipathd container...
Dec 09 16:20:00 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a8b433bff4c349f902ba21e1dfd966e575379a119782d3abbca749ebd42cd4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a8b433bff4c349f902ba21e1dfd966e575379a119782d3abbca749ebd42cd4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:00 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2.
Dec 09 16:20:00 compute-0 podman[225154]: 2025-12-09 16:20:00.456963357 +0000 UTC m=+0.123928415 container init 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 09 16:20:00 compute-0 multipathd[225169]: + sudo -E kolla_set_configs
Dec 09 16:20:00 compute-0 podman[225154]: 2025-12-09 16:20:00.484611313 +0000 UTC m=+0.151576411 container start 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Dec 09 16:20:00 compute-0 podman[225154]: multipathd
Dec 09 16:20:00 compute-0 sudo[225175]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 09 16:20:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:20:00 compute-0 systemd[1]: Started multipathd container.
Dec 09 16:20:00 compute-0 sudo[225175]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 16:20:00 compute-0 sudo[225175]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 09 16:20:00 compute-0 sudo[225112]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:00 compute-0 multipathd[225169]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 16:20:00 compute-0 multipathd[225169]: INFO:__main__:Validating config file
Dec 09 16:20:00 compute-0 multipathd[225169]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 16:20:00 compute-0 multipathd[225169]: INFO:__main__:Writing out command to execute
Dec 09 16:20:00 compute-0 sudo[225175]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:00 compute-0 multipathd[225169]: ++ cat /run_command
Dec 09 16:20:00 compute-0 multipathd[225169]: + CMD='/usr/sbin/multipathd -d'
Dec 09 16:20:00 compute-0 multipathd[225169]: + ARGS=
Dec 09 16:20:00 compute-0 multipathd[225169]: + sudo kolla_copy_cacerts
Dec 09 16:20:00 compute-0 sudo[225193]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 09 16:20:00 compute-0 sudo[225193]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 16:20:00 compute-0 sudo[225193]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 09 16:20:00 compute-0 sudo[225193]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:00 compute-0 multipathd[225169]: + [[ ! -n '' ]]
Dec 09 16:20:00 compute-0 multipathd[225169]: + . kolla_extend_start
Dec 09 16:20:00 compute-0 multipathd[225169]: Running command: '/usr/sbin/multipathd -d'
Dec 09 16:20:00 compute-0 multipathd[225169]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 09 16:20:00 compute-0 multipathd[225169]: + umask 0022
Dec 09 16:20:00 compute-0 multipathd[225169]: + exec /usr/sbin/multipathd -d
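The xtrace above is the standard kolla_start sequence: kolla_set_configs copies files according to /var/lib/kolla/config_files/config.json, the payload command is read from /run_command, kolla_copy_cacerts merges CA bundles, and the payload is exec'd so it becomes PID 1 of the container. Only the command is confirmed by the log (`cat /run_command` yields /usr/sbin/multipathd -d); assuming the conventional Kolla schema, the config file would look roughly like:

    # hypothetical content, standard kolla layout assumed; run inside the container
    cat /var/lib/kolla/config_files/config.json
    # {
    #   "command": "/usr/sbin/multipathd -d",
    #   "config_files": [ ...files copied because KOLLA_CONFIG_STRATEGY=COPY_ALWAYS... ]
    # }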
Dec 09 16:20:00 compute-0 podman[225176]: 2025-12-09 16:20:00.579697026 +0000 UTC m=+0.076817095 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 09 16:20:00 compute-0 multipathd[225169]: 5308.225122 | --------start up--------
Dec 09 16:20:00 compute-0 multipathd[225169]: 5308.225143 | read /etc/multipath.conf
Dec 09 16:20:00 compute-0 systemd[1]: 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2-857d6f6985053b9.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 16:20:00 compute-0 systemd[1]: 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2-857d6f6985053b9.service: Failed with result 'exit-code'.
Dec 09 16:20:00 compute-0 multipathd[225169]: 5308.232363 | path checkers start up
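The two ...-857d6f6985053b9.service failures above are podman's transient healthcheck unit exiting non-zero: the timer fired while multipathd was still starting, so /openstack/healthcheck failed once, matching health_status=starting and health_failing_streak=1 in the container event. The recorded health state can be queried with, for example:

    # health state as podman records it (template key may vary across podman versions)
    podman inspect multipathd --format '{{ .State.Health.Status }}'
    # or run the check by hand and look at the exit code
    podman healthcheck run multipathd; echo $?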
Dec 09 16:20:01 compute-0 python3.9[225358]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:20:01 compute-0 ceph-mon[75222]: pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:01 compute-0 anacron[4310]: Job `cron.monthly' started
Dec 09 16:20:01 compute-0 anacron[4310]: Job `cron.monthly' terminated
Dec 09 16:20:01 compute-0 anacron[4310]: Normal exit (3 jobs run)
Dec 09 16:20:01 compute-0 sudo[225512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuopsggpsiorhqmljmxddmwrievyilld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297201.5183427-586-2543451511936/AnsiballZ_command.py'
Dec 09 16:20:01 compute-0 sudo[225512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:02 compute-0 python3.9[225514]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:20:02 compute-0 sudo[225512]: pam_unix(sudo:session): session closed for user root
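The command above is how the role discovers which containers bind-mount /etc/multipath.conf and therefore need a restart after a multipath config change. Given the earlier create command, only multipathd mounts that file on this host, so the expected output is a single name:

    podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'
    # multipathd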
Dec 09 16:20:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:02 compute-0 sudo[225677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlectwegmkyudzzmekhavowaedmkqhlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297202.2942367-594-39690262465491/AnsiballZ_systemd.py'
Dec 09 16:20:02 compute-0 sudo[225677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:02 compute-0 python3.9[225679]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:20:02 compute-0 systemd[1]: Stopping multipathd container...
Dec 09 16:20:03 compute-0 multipathd[225169]: 5310.964352 | exit (signal)
Dec 09 16:20:03 compute-0 multipathd[225169]: 5310.964664 | --------shut down-------
Dec 09 16:20:03 compute-0 ceph-mon[75222]: pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:03 compute-0 systemd[1]: libpod-84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2.scope: Deactivated successfully.
Dec 09 16:20:03 compute-0 podman[225683]: 2025-12-09 16:20:03.354019639 +0000 UTC m=+0.365356939 container died 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec 09 16:20:03 compute-0 systemd[1]: 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2-857d6f6985053b9.timer: Deactivated successfully.
Dec 09 16:20:03 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2.
Dec 09 16:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2-userdata-shm.mount: Deactivated successfully.
Dec 09 16:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-14a8b433bff4c349f902ba21e1dfd966e575379a119782d3abbca749ebd42cd4-merged.mount: Deactivated successfully.
Dec 09 16:20:03 compute-0 podman[225683]: 2025-12-09 16:20:03.655592163 +0000 UTC m=+0.666929453 container cleanup 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:20:03 compute-0 podman[225683]: multipathd
Dec 09 16:20:03 compute-0 podman[225710]: multipathd
Dec 09 16:20:03 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 09 16:20:03 compute-0 systemd[1]: Stopped multipathd container.
Dec 09 16:20:03 compute-0 systemd[1]: Starting multipathd container...
Dec 09 16:20:03 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a8b433bff4c349f902ba21e1dfd966e575379a119782d3abbca749ebd42cd4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a8b433bff4c349f902ba21e1dfd966e575379a119782d3abbca749ebd42cd4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:03 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2.
Dec 09 16:20:03 compute-0 podman[225723]: 2025-12-09 16:20:03.866231271 +0000 UTC m=+0.107230149 container init 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 09 16:20:03 compute-0 multipathd[225738]: + sudo -E kolla_set_configs
Dec 09 16:20:03 compute-0 podman[225723]: 2025-12-09 16:20:03.888617298 +0000 UTC m=+0.129616186 container start 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:20:03 compute-0 sudo[225744]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 09 16:20:03 compute-0 sudo[225744]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 16:20:03 compute-0 sudo[225744]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 09 16:20:03 compute-0 podman[225723]: multipathd
Dec 09 16:20:03 compute-0 systemd[1]: Started multipathd container.
Dec 09 16:20:03 compute-0 sudo[225677]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:03 compute-0 multipathd[225738]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 16:20:03 compute-0 multipathd[225738]: INFO:__main__:Validating config file
Dec 09 16:20:03 compute-0 multipathd[225738]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 16:20:03 compute-0 multipathd[225738]: INFO:__main__:Writing out command to execute
Dec 09 16:20:03 compute-0 sudo[225744]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:03 compute-0 multipathd[225738]: ++ cat /run_command
Dec 09 16:20:03 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 09 16:20:03 compute-0 multipathd[225738]: + CMD='/usr/sbin/multipathd -d'
Dec 09 16:20:03 compute-0 multipathd[225738]: + ARGS=
Dec 09 16:20:03 compute-0 multipathd[225738]: + sudo kolla_copy_cacerts
Dec 09 16:20:03 compute-0 podman[225745]: 2025-12-09 16:20:03.971324599 +0000 UTC m=+0.074628232 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:20:03 compute-0 sudo[225776]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 09 16:20:03 compute-0 sudo[225776]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 16:20:03 compute-0 sudo[225776]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 09 16:20:03 compute-0 systemd[1]: 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2-c203cf0939cbbd6.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 16:20:03 compute-0 systemd[1]: 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2-c203cf0939cbbd6.service: Failed with result 'exit-code'.
Dec 09 16:20:03 compute-0 sudo[225776]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:03 compute-0 multipathd[225738]: + [[ ! -n '' ]]
Dec 09 16:20:03 compute-0 multipathd[225738]: + . kolla_extend_start
Dec 09 16:20:03 compute-0 multipathd[225738]: Running command: '/usr/sbin/multipathd -d'
Dec 09 16:20:03 compute-0 multipathd[225738]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 09 16:20:03 compute-0 multipathd[225738]: + umask 0022
Dec 09 16:20:03 compute-0 multipathd[225738]: + exec /usr/sbin/multipathd -d
Dec 09 16:20:03 compute-0 multipathd[225738]: 5311.636392 | --------start up--------
Dec 09 16:20:03 compute-0 multipathd[225738]: 5311.636418 | read /etc/multipath.conf
Dec 09 16:20:04 compute-0 multipathd[225738]: 5311.642660 | path checkers start up
Dec 09 16:20:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:04 compute-0 sudo[225930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpcyxkiazvlkzgiymhccqykgvwraarzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297204.1142986-602-143651512459043/AnsiballZ_file.py'
Dec 09 16:20:04 compute-0 sudo[225930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:04 compute-0 python3.9[225932]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:04 compute-0 sudo[225930]: pam_unix(sudo:session): session closed for user root
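The tasks between 16:20:01 and 16:20:04 show the role's restart-sentinel pattern end to end: stat /etc/multipath/.multipath_restart_required, restart the consuming container while the flag exists, then delete the flag. A condensed sketch of the same flow, using the paths and unit name from the log:

    SENTINEL=/etc/multipath/.multipath_restart_required
    if [ -e "$SENTINEL" ]; then
        systemctl restart edpm_multipathd.service   # pick up the new /etc/multipath.conf
        rm -f "$SENTINEL"                           # clear the flag once applied
    fi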
Dec 09 16:20:05 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 09 16:20:05 compute-0 ceph-mon[75222]: pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:05 compute-0 sudo[226083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lieqzjqjxlwkiruzruicbzhjwcnorbpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297205.0309143-614-52672227823589/AnsiballZ_file.py'
Dec 09 16:20:05 compute-0 sudo[226083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:20:05 compute-0 python3.9[226085]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 09 16:20:05 compute-0 sudo[226083]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:06 compute-0 sudo[226235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gazcakwvmmulxrrqvbektyitdfsgprtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297205.8838863-622-256689217626582/AnsiballZ_modprobe.py'
Dec 09 16:20:06 compute-0 sudo[226235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:06 compute-0 python3.9[226237]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 09 16:20:06 compute-0 kernel: Key type psk registered
Dec 09 16:20:06 compute-0 sudo[226235]: pam_unix(sudo:session): session closed for user root
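The modprobe task above loads nvme-fabrics immediately; persistent=disabled means this task writes no config of its own, and persistence is handled by the tasks that follow. The effect can be checked on the host:

    modprobe nvme-fabrics              # what the task does, in effect
    lsmod | grep '^nvme_fabrics'       # loaded module names use underscores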
Dec 09 16:20:06 compute-0 sudo[226400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umgtkxntyahmydcuzzwsjicbjkpmkrhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297206.6923206-630-134835376064703/AnsiballZ_stat.py'
Dec 09 16:20:06 compute-0 sudo[226400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:07 compute-0 python3.9[226402]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:20:07 compute-0 sudo[226400]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:07 compute-0 ceph-mon[75222]: pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:07 compute-0 sudo[226523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aflhaxgpfupjuamtxqdrldoombfergph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297206.6923206-630-134835376064703/AnsiballZ_copy.py'
Dec 09 16:20:07 compute-0 sudo[226523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:07 compute-0 python3.9[226525]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765297206.6923206-630-134835376064703/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:07 compute-0 sudo[226523]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:08 compute-0 sudo[226675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uinmgqncxanfpvkjnvnkjagygzxtexup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297207.9881308-646-160247057085340/AnsiballZ_lineinfile.py'
Dec 09 16:20:08 compute-0 sudo[226675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:08 compute-0 python3.9[226677]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:08 compute-0 sudo[226675]: pam_unix(sudo:session): session closed for user root
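The copy and lineinfile tasks above make the module load persistent via both mechanisms: a drop-in at /etc/modules-load.d/nvme-fabrics.conf (read by systemd-modules-load at boot) and the legacy /etc/modules list. The lineinfile parameters confirm the /etc/modules entry is the single token nvme-fabrics; the drop-in's content is not logged but, given the module-load.conf.j2 template name, is presumably the same line:

    # expected (partly assumed) contents of the two persistence files
    cat /etc/modules-load.d/nvme-fabrics.conf   # -> nvme-fabrics
    grep '^nvme-fabrics' /etc/modules           # -> nvme-fabrics

The systemd-modules-load.service restart a few lines below re-reads /etc/modules-load.d/*.conf, so a malformed drop-in would surface there.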
Dec 09 16:20:08 compute-0 sudo[226827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssplmfihxdmdabekxtlpekjuvdxfjbox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297208.6945324-654-26420073461150/AnsiballZ_systemd.py'
Dec 09 16:20:08 compute-0 sudo[226827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:09 compute-0 python3.9[226829]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:20:09 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 09 16:20:09 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 09 16:20:09 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 09 16:20:09 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 09 16:20:09 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 09 16:20:09 compute-0 sudo[226827]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:09 compute-0 ceph-mon[75222]: pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:09 compute-0 sudo[226983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ickbntwjsgfaoaqzvdoaqlufxfguiwpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297209.6297362-662-189406174242748/AnsiballZ_dnf.py'
Dec 09 16:20:09 compute-0 sudo[226983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:10 compute-0 python3.9[226985]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
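The dnf task above installs nvme-cli, the userspace tooling that pairs with the nvme-fabrics module loaded earlier. The manual equivalent plus a quick smoke test:

    dnf -y install nvme-cli
    nvme version        # confirm the CLI landed on PATH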
Dec 09 16:20:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:20:11 compute-0 ceph-mon[75222]: pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:12 compute-0 systemd[1]: Reloading.
Dec 09 16:20:12 compute-0 systemd-sysv-generator[227019]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:20:12 compute-0 systemd-rc-local-generator[227016]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:20:13 compute-0 systemd[1]: Reloading.
Dec 09 16:20:13 compute-0 systemd-rc-local-generator[227054]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:20:13 compute-0 systemd-sysv-generator[227058]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:20:13 compute-0 ceph-mon[75222]: pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:13 compute-0 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 09 16:20:13 compute-0 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 09 16:20:13 compute-0 lvm[227100]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:20:13 compute-0 lvm[227099]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:20:13 compute-0 lvm[227099]: VG ceph_vg2 finished
Dec 09 16:20:13 compute-0 lvm[227100]: VG ceph_vg0 finished
Dec 09 16:20:13 compute-0 lvm[227101]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:20:13 compute-0 lvm[227101]: VG ceph_vg1 finished
Dec 09 16:20:13 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 16:20:13 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 16:20:13 compute-0 systemd[1]: Reloading.
Dec 09 16:20:13 compute-0 systemd-rc-local-generator[227154]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:20:13 compute-0 systemd-sysv-generator[227157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:20:14 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 16:20:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:14 compute-0 sudo[226983]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:15 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 16:20:15 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 16:20:15 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.504s CPU time.
Dec 09 16:20:15 compute-0 systemd[1]: run-r86f7152ad71c4fc19dae7e157c7dd9e6.service: Deactivated successfully.
Dec 09 16:20:15 compute-0 sudo[228453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrpblibbgmntrsokdxlyitnnvgzadsbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297214.7852483-670-44382030534150/AnsiballZ_systemd_service.py'
Dec 09 16:20:15 compute-0 sudo[228453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:15 compute-0 ceph-mon[75222]: pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:15 compute-0 python3.9[228455]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:20:15 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec 09 16:20:15 compute-0 iscsid[216164]: iscsid shutting down.
Dec 09 16:20:15 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec 09 16:20:15 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec 09 16:20:15 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 09 16:20:15 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 09 16:20:15 compute-0 systemd[1]: Started Open-iSCSI.
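The "One time configuration for iscsi.service was skipped" line above is systemd condition logic: the unit carries ConditionPathExists=!/etc/iscsi/initiatorname.iscsi, so the one-shot setup only runs when no initiator name exists yet. Here the file is already present, which can be confirmed with:

    cat /etc/iscsi/initiatorname.iscsi   # e.g. a single InitiatorName=iqn... line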
Dec 09 16:20:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:20:15 compute-0 sudo[228453]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:15 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 09 16:20:15 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 09 16:20:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:16 compute-0 python3.9[228611]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 16:20:17 compute-0 sudo[228765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stcvgpautlpjqskmwmhekjprdbdmrmdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297216.8345513-688-216585336610234/AnsiballZ_file.py'
Dec 09 16:20:17 compute-0 sudo[228765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:17 compute-0 python3.9[228767]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:17 compute-0 sudo[228765]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:17 compute-0 ceph-mon[75222]: pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:20:17.837 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:20:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:20:17.838 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:20:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:20:17.838 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:20:18 compute-0 sudo[228930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqxoneccgyoeiqucmoggirxayrdmzibi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297217.7058048-699-222879738843716/AnsiballZ_systemd_service.py'
Dec 09 16:20:18 compute-0 sudo[228930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:18 compute-0 podman[228891]: 2025-12-09 16:20:18.098474945 +0000 UTC m=+0.096962838 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:20:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Dec 09 16:20:18 compute-0 python3.9[228938]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 16:20:18 compute-0 systemd[1]: Reloading.
Dec 09 16:20:18 compute-0 systemd-rc-local-generator[228973]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:20:18 compute-0 systemd-sysv-generator[228977]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:20:18 compute-0 sudo[228930]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:19 compute-0 python3.9[229130]: ansible-ansible.builtin.service_facts Invoked
Dec 09 16:20:19 compute-0 network[229147]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 16:20:19 compute-0 network[229148]: 'network-scripts' will be removed from distribution in near future.
Dec 09 16:20:19 compute-0 network[229149]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 16:20:19 compute-0 ceph-mon[75222]: pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Dec 09 16:20:20 compute-0 podman[229156]: 2025-12-09 16:20:20.25354898 +0000 UTC m=+0.054624635 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 09 16:20:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 09 16:20:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:20:21 compute-0 ceph-mon[75222]: pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 09 16:20:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 09 16:20:23 compute-0 sudo[229439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oajgemgokeovzuqwzzfxxcocnqylkhzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297222.9201233-718-170440628871267/AnsiballZ_systemd_service.py'
Dec 09 16:20:23 compute-0 sudo[229439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:23 compute-0 ceph-mon[75222]: pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 09 16:20:23 compute-0 python3.9[229441]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:20:23 compute-0 sudo[229439]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:24 compute-0 sudo[229594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjoliocuwpxhiisrmglnikaugttivgzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297223.7490425-718-271832121587434/AnsiballZ_systemd_service.py'
Dec 09 16:20:24 compute-0 sudo[229594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:24 compute-0 sshd-session[229542]: Invalid user odoo from 146.190.31.45 port 38848
Dec 09 16:20:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 09 16:20:24 compute-0 sshd-session[229542]: Connection closed by invalid user odoo 146.190.31.45 port 38848 [preauth]
Dec 09 16:20:24 compute-0 python3.9[229596]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:20:24 compute-0 sudo[229594]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:24 compute-0 sudo[229747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwelxkowlsthbcppfqqmprazkskbfokm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297224.5723271-718-187596889394335/AnsiballZ_systemd_service.py'
Dec 09 16:20:24 compute-0 sudo[229747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:25 compute-0 python3.9[229749]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:20:25 compute-0 sudo[229747]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:25 compute-0 ceph-mon[75222]: pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 09 16:20:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:20:25 compute-0 sudo[229900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gunvhueipgblpyjycpwcivwvlnfextwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297225.3834498-718-144807347950941/AnsiballZ_systemd_service.py'
Dec 09 16:20:25 compute-0 sudo[229900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:20:25
Dec 09 16:20:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:20:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:20:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec 09 16:20:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:20:26 compute-0 python3.9[229902]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:20:26 compute-0 sudo[229900]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:20:26 compute-0 sudo[230053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whxqhextplmdzooenlycehjtgbwnrcbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297226.2329035-718-147952943103458/AnsiballZ_systemd_service.py'
Dec 09 16:20:26 compute-0 sudo[230053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:20:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:20:26 compute-0 python3.9[230055]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:20:26 compute-0 sudo[230053]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:27 compute-0 sudo[230206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uusmucosgjdlyjbghibugvmmvsiikwii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297227.0648637-718-58667603917465/AnsiballZ_systemd_service.py'
Dec 09 16:20:27 compute-0 sudo[230206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:27 compute-0 ceph-mon[75222]: pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 09 16:20:27 compute-0 python3.9[230208]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:20:27 compute-0 sudo[230206]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:28 compute-0 sudo[230359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjbxqkyblsclqwdhmvdbbvucowkxwlck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297227.8131073-718-211411542600273/AnsiballZ_systemd_service.py'
Dec 09 16:20:28 compute-0 sudo[230359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 09 16:20:28 compute-0 python3.9[230361]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:20:28 compute-0 sudo[230359]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:28 compute-0 sudo[230512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkiaolevkpdzblqpwpfwursttmnwvjik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297228.6179106-718-148811406299741/AnsiballZ_systemd_service.py'
Dec 09 16:20:28 compute-0 sudo[230512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:29 compute-0 python3.9[230514]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:20:29 compute-0 sudo[230512]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:29 compute-0 ceph-mon[75222]: pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 09 16:20:29 compute-0 sudo[230665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfgldiiwhledvzijjdifryawszoafhmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297229.6741292-777-139075603451901/AnsiballZ_file.py'
Dec 09 16:20:29 compute-0 sudo[230665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:30 compute-0 python3.9[230667]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:30 compute-0 sudo[230665]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 09 16:20:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:20:30 compute-0 sudo[230817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhivfhgijcybuoqncxwrpemuxmnidibu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297230.342129-777-122665766451158/AnsiballZ_file.py'
Dec 09 16:20:30 compute-0 sudo[230817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:30 compute-0 python3.9[230819]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:30 compute-0 sudo[230817]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:31 compute-0 sudo[230969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcyxaqbpklxernnoezwlrkxuyryeomtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297231.0693412-777-76154026757082/AnsiballZ_file.py'
Dec 09 16:20:31 compute-0 sudo[230969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:31 compute-0 ceph-mon[75222]: pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 09 16:20:31 compute-0 python3.9[230971]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:31 compute-0 sudo[230969]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:31 compute-0 sudo[231121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihbubhhtszfvmkfshxrkkkinevdeeqgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297231.7132988-777-123750807209731/AnsiballZ_file.py'
Dec 09 16:20:31 compute-0 sudo[231121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:32 compute-0 python3.9[231123]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:32 compute-0 sudo[231121]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:32 compute-0 sudo[231273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijjyaqghqefulkrlpqfxlnalcypldhrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297232.318509-777-240983273003274/AnsiballZ_file.py'
Dec 09 16:20:32 compute-0 sudo[231273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:32 compute-0 python3.9[231275]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:32 compute-0 sudo[231273]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:33 compute-0 sudo[231425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qywuyhfjbkffczjnxuuotwgwlsyeglqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297233.0508268-777-162818315520669/AnsiballZ_file.py'
Dec 09 16:20:33 compute-0 sudo[231425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:33 compute-0 ceph-mon[75222]: pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:33 compute-0 python3.9[231427]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:33 compute-0 sudo[231425]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:34 compute-0 sudo[231594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvhvpswwueninqubwyjaqcgbffspweai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297233.8348083-777-98196142195618/AnsiballZ_file.py'
Dec 09 16:20:34 compute-0 sudo[231594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:34 compute-0 podman[231551]: 2025-12-09 16:20:34.17590199 +0000 UTC m=+0.065118281 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:20:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:34 compute-0 python3.9[231598]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:34 compute-0 sudo[231594]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:34 compute-0 sudo[231749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngvyzkdhyyyzkcdllabmvmpvssfypghg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297234.5262568-777-122858227093157/AnsiballZ_file.py'
Dec 09 16:20:34 compute-0 sudo[231749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:35 compute-0 python3.9[231751]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:35 compute-0 sudo[231749]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.506186) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297235506237, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1828, "num_deletes": 250, "total_data_size": 3126248, "memory_usage": 3165496, "flush_reason": "Manual Compaction"}
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297235517940, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1758922, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11764, "largest_seqno": 13591, "table_properties": {"data_size": 1753008, "index_size": 2989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14826, "raw_average_key_size": 20, "raw_value_size": 1739900, "raw_average_value_size": 2360, "num_data_blocks": 138, "num_entries": 737, "num_filter_entries": 737, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765297025, "oldest_key_time": 1765297025, "file_creation_time": 1765297235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11798 microseconds, and 5573 cpu microseconds.
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:20:35 compute-0 ceph-mon[75222]: pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.517991) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1758922 bytes OK
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.518007) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.519789) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.519817) EVENT_LOG_v1 {"time_micros": 1765297235519810, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.519852) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3118530, prev total WAL file size 3119685, number of live WAL files 2.
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.521439) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1717KB)], [29(7910KB)]
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297235521791, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9859467, "oldest_snapshot_seqno": -1}
Dec 09 16:20:35 compute-0 sudo[231901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmroswaihecnwgjrdqzwjvlbwligyoch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297235.259552-834-34436709455606/AnsiballZ_file.py'
Dec 09 16:20:35 compute-0 sudo[231901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4034 keys, 7836083 bytes, temperature: kUnknown
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297235574609, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7836083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7807050, "index_size": 17851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 95960, "raw_average_key_size": 23, "raw_value_size": 7732281, "raw_average_value_size": 1916, "num_data_blocks": 775, "num_entries": 4034, "num_filter_entries": 4034, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765297235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.574889) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7836083 bytes
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.576227) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.4 rd, 148.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.7 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(10.1) write-amplify(4.5) OK, records in: 4447, records dropped: 413 output_compression: NoCompression
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.576249) EVENT_LOG_v1 {"time_micros": 1765297235576239, "job": 12, "event": "compaction_finished", "compaction_time_micros": 52885, "compaction_time_cpu_micros": 25158, "output_level": 6, "num_output_files": 1, "total_output_size": 7836083, "num_input_records": 4447, "num_output_records": 4034, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297235576640, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297235578353, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.521324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.578409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.578414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.578416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.578418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:20:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:20:35.578420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:20:35 compute-0 python3.9[231903]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:35 compute-0 sudo[231901]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:36 compute-0 sudo[232053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdvrhuqlrafxkwtgjzeijbbaiwcijbse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297235.9213848-834-93162394806099/AnsiballZ_file.py'
Dec 09 16:20:36 compute-0 sudo[232053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:36 compute-0 python3.9[232055]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:36 compute-0 sudo[232053]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:20:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:20:36 compute-0 sudo[232205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwkmuxeurhyqmvzgeqwpccrqyagrdbms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297236.6602418-834-108132887147387/AnsiballZ_file.py'
Dec 09 16:20:36 compute-0 sudo[232205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:37 compute-0 python3.9[232207]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:37 compute-0 sudo[232205]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:37 compute-0 ceph-mon[75222]: pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:37 compute-0 sudo[232357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwqtbufffbvgfcwjncglpmobfuecihyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297237.2963061-834-207013525961991/AnsiballZ_file.py'
Dec 09 16:20:37 compute-0 sudo[232357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:37 compute-0 python3.9[232359]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:37 compute-0 sudo[232357]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:38 compute-0 sudo[232509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcdoaecotxcwhzzntfvvktlyivgmikmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297238.0169694-834-141102252940216/AnsiballZ_file.py'
Dec 09 16:20:38 compute-0 sudo[232509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:38 compute-0 python3.9[232511]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:38 compute-0 sudo[232509]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:38 compute-0 sudo[232661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfvbujggnnteyljxzcudiwvyutgqxvir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297238.6585143-834-56032827019553/AnsiballZ_file.py'
Dec 09 16:20:38 compute-0 sudo[232661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:39 compute-0 python3.9[232663]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:39 compute-0 sudo[232661]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:39 compute-0 ceph-mon[75222]: pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:39 compute-0 sudo[232813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rojzhksezdwjenxcyvitaiwbaeyleghz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297239.326932-834-69032000901761/AnsiballZ_file.py'
Dec 09 16:20:39 compute-0 sudo[232813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:39 compute-0 python3.9[232815]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:39 compute-0 sudo[232813]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:40 compute-0 sudo[232965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzcikszwrrmbpooevrcdagomhllkfhjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297239.9818013-834-84341888444082/AnsiballZ_file.py'
Dec 09 16:20:40 compute-0 sudo[232965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:40 compute-0 python3.9[232967]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:20:40 compute-0 sudo[232965]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:20:41 compute-0 sudo[233117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llkxppdigccmzlxbtkqsdpiwfbzwuaja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297240.7982666-892-159858211287966/AnsiballZ_command.py'
Dec 09 16:20:41 compute-0 sudo[233117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:41 compute-0 python3.9[233119]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:20:41 compute-0 sudo[233117]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:41 compute-0 ceph-mon[75222]: pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:42 compute-0 python3.9[233271]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 16:20:42 compute-0 sudo[233421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaoawlmzjminyyojhgubwvdqzuohitzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297242.604596-910-171102686968895/AnsiballZ_systemd_service.py'
Dec 09 16:20:42 compute-0 sudo[233421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:43 compute-0 python3.9[233423]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 16:20:43 compute-0 systemd[1]: Reloading.
Dec 09 16:20:43 compute-0 systemd-sysv-generator[233451]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:20:43 compute-0 systemd-rc-local-generator[233445]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:20:43 compute-0 ceph-mon[75222]: pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:43 compute-0 sudo[233421]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:43 compute-0 sudo[233483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:20:43 compute-0 sudo[233483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:20:43 compute-0 sudo[233483]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:43 compute-0 sudo[233508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:20:43 compute-0 sudo[233508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:20:44 compute-0 sudo[233673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbrrgmrdevxbcmxouwalozjuctvitirx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297243.9872103-918-273891911207749/AnsiballZ_command.py'
Dec 09 16:20:44 compute-0 sudo[233673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:44 compute-0 python3.9[233675]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:20:44 compute-0 sudo[233673]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:44 compute-0 sudo[233508]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:20:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:20:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:20:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:20:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:20:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:20:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:20:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:20:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:20:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:20:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:20:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:20:44 compute-0 sudo[233770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:20:44 compute-0 sudo[233770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:20:44 compute-0 sudo[233770]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:20:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:20:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:20:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:20:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:20:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:20:44 compute-0 sudo[233818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:20:44 compute-0 sudo[233818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:20:44 compute-0 sudo[233893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhqcmorwqpnwessvydqastffdtftkytn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297244.6071522-918-77025419073214/AnsiballZ_command.py'
Dec 09 16:20:44 compute-0 sudo[233893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:45 compute-0 python3.9[233895]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:20:45 compute-0 sudo[233893]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:45 compute-0 podman[233907]: 2025-12-09 16:20:45.099917775 +0000 UTC m=+0.046793640 container create 7f4339f2565b716540743c2edbb05a91836878fdea163e99077d9808fb6d1314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:20:45 compute-0 systemd[1]: Started libpod-conmon-7f4339f2565b716540743c2edbb05a91836878fdea163e99077d9808fb6d1314.scope.
Dec 09 16:20:45 compute-0 podman[233907]: 2025-12-09 16:20:45.077795867 +0000 UTC m=+0.024671762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:20:45 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:20:45 compute-0 podman[233907]: 2025-12-09 16:20:45.204531547 +0000 UTC m=+0.151407492 container init 7f4339f2565b716540743c2edbb05a91836878fdea163e99077d9808fb6d1314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 16:20:45 compute-0 podman[233907]: 2025-12-09 16:20:45.211004521 +0000 UTC m=+0.157880386 container start 7f4339f2565b716540743c2edbb05a91836878fdea163e99077d9808fb6d1314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:20:45 compute-0 podman[233907]: 2025-12-09 16:20:45.214309685 +0000 UTC m=+0.161185550 container attach 7f4339f2565b716540743c2edbb05a91836878fdea163e99077d9808fb6d1314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:20:45 compute-0 xenodochial_moser[233946]: 167 167
Dec 09 16:20:45 compute-0 systemd[1]: libpod-7f4339f2565b716540743c2edbb05a91836878fdea163e99077d9808fb6d1314.scope: Deactivated successfully.
Dec 09 16:20:45 compute-0 podman[233907]: 2025-12-09 16:20:45.218253717 +0000 UTC m=+0.165129582 container died 7f4339f2565b716540743c2edbb05a91836878fdea163e99077d9808fb6d1314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 09 16:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed8283f8aa25f493ed0df1df6c36a607bb94202567d7843d79c10e4eca91a61d-merged.mount: Deactivated successfully.
Dec 09 16:20:45 compute-0 podman[233907]: 2025-12-09 16:20:45.257854842 +0000 UTC m=+0.204730707 container remove 7f4339f2565b716540743c2edbb05a91836878fdea163e99077d9808fb6d1314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:20:45 compute-0 systemd[1]: libpod-conmon-7f4339f2565b716540743c2edbb05a91836878fdea163e99077d9808fb6d1314.scope: Deactivated successfully.
Dec 09 16:20:45 compute-0 podman[234046]: 2025-12-09 16:20:45.419400811 +0000 UTC m=+0.035769607 container create 0097da02324ccc5e6f46e901ff9a9ed61c95f9d7b9c236b5efeaf3f53a16be96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:20:45 compute-0 systemd[1]: Started libpod-conmon-0097da02324ccc5e6f46e901ff9a9ed61c95f9d7b9c236b5efeaf3f53a16be96.scope.
Dec 09 16:20:45 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452f0988ccf9bb3391e414a19862541f670233671326cf147ffc890961a0b281/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:45 compute-0 podman[234046]: 2025-12-09 16:20:45.404798057 +0000 UTC m=+0.021166883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452f0988ccf9bb3391e414a19862541f670233671326cf147ffc890961a0b281/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452f0988ccf9bb3391e414a19862541f670233671326cf147ffc890961a0b281/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452f0988ccf9bb3391e414a19862541f670233671326cf147ffc890961a0b281/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452f0988ccf9bb3391e414a19862541f670233671326cf147ffc890961a0b281/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:20:45 compute-0 podman[234046]: 2025-12-09 16:20:45.514155693 +0000 UTC m=+0.130524519 container init 0097da02324ccc5e6f46e901ff9a9ed61c95f9d7b9c236b5efeaf3f53a16be96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_raman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:20:45 compute-0 podman[234046]: 2025-12-09 16:20:45.521225604 +0000 UTC m=+0.137594410 container start 0097da02324ccc5e6f46e901ff9a9ed61c95f9d7b9c236b5efeaf3f53a16be96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_raman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 09 16:20:45 compute-0 podman[234046]: 2025-12-09 16:20:45.524925709 +0000 UTC m=+0.141294525 container attach 0097da02324ccc5e6f46e901ff9a9ed61c95f9d7b9c236b5efeaf3f53a16be96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:20:45 compute-0 sudo[234118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhodjxbkvldsxbyzxxdruxcmksqrfvnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297245.2526808-918-1655887440236/AnsiballZ_command.py'
Dec 09 16:20:45 compute-0 sudo[234118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:45 compute-0 ceph-mon[75222]: pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:45 compute-0 python3.9[234120]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:20:45 compute-0 sudo[234118]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:46 compute-0 focused_raman[234087]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:20:46 compute-0 focused_raman[234087]: --> All data devices are unavailable
Dec 09 16:20:46 compute-0 systemd[1]: libpod-0097da02324ccc5e6f46e901ff9a9ed61c95f9d7b9c236b5efeaf3f53a16be96.scope: Deactivated successfully.
Dec 09 16:20:46 compute-0 podman[234046]: 2025-12-09 16:20:46.047994999 +0000 UTC m=+0.664363825 container died 0097da02324ccc5e6f46e901ff9a9ed61c95f9d7b9c236b5efeaf3f53a16be96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_raman, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 09 16:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-452f0988ccf9bb3391e414a19862541f670233671326cf147ffc890961a0b281-merged.mount: Deactivated successfully.
Dec 09 16:20:46 compute-0 podman[234046]: 2025-12-09 16:20:46.089494118 +0000 UTC m=+0.705862924 container remove 0097da02324ccc5e6f46e901ff9a9ed61c95f9d7b9c236b5efeaf3f53a16be96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 09 16:20:46 compute-0 systemd[1]: libpod-conmon-0097da02324ccc5e6f46e901ff9a9ed61c95f9d7b9c236b5efeaf3f53a16be96.scope: Deactivated successfully.
Dec 09 16:20:46 compute-0 sudo[233818]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:46 compute-0 sudo[234247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:20:46 compute-0 sudo[234247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:20:46 compute-0 sudo[234247]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:46 compute-0 sudo[234295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:20:46 compute-0 sudo[234295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:20:46 compute-0 sudo[234344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlhmvyhrokeozyfgqxngopcbhxcerzak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297245.9529386-918-206469360295247/AnsiballZ_command.py'
Dec 09 16:20:46 compute-0 sudo[234344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:46 compute-0 python3.9[234348]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:20:46 compute-0 sudo[234344]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:46 compute-0 podman[234365]: 2025-12-09 16:20:46.530508777 +0000 UTC m=+0.038118954 container create d807af2ac14af136a2dd5d80dbfb903da32b6a5619e2a633fea5816c7c3b59fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_yalow, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:20:46 compute-0 systemd[1]: Started libpod-conmon-d807af2ac14af136a2dd5d80dbfb903da32b6a5619e2a633fea5816c7c3b59fa.scope.
Dec 09 16:20:46 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:20:46 compute-0 podman[234365]: 2025-12-09 16:20:46.606093504 +0000 UTC m=+0.113703661 container init d807af2ac14af136a2dd5d80dbfb903da32b6a5619e2a633fea5816c7c3b59fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_yalow, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:20:46 compute-0 podman[234365]: 2025-12-09 16:20:46.512509325 +0000 UTC m=+0.020119512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:20:46 compute-0 podman[234365]: 2025-12-09 16:20:46.612607079 +0000 UTC m=+0.120217246 container start d807af2ac14af136a2dd5d80dbfb903da32b6a5619e2a633fea5816c7c3b59fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_yalow, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 09 16:20:46 compute-0 podman[234365]: 2025-12-09 16:20:46.615863671 +0000 UTC m=+0.123473838 container attach d807af2ac14af136a2dd5d80dbfb903da32b6a5619e2a633fea5816c7c3b59fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:20:46 compute-0 relaxed_yalow[234420]: 167 167
Dec 09 16:20:46 compute-0 systemd[1]: libpod-d807af2ac14af136a2dd5d80dbfb903da32b6a5619e2a633fea5816c7c3b59fa.scope: Deactivated successfully.
Dec 09 16:20:46 compute-0 podman[234365]: 2025-12-09 16:20:46.617784536 +0000 UTC m=+0.125394713 container died d807af2ac14af136a2dd5d80dbfb903da32b6a5619e2a633fea5816c7c3b59fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_yalow, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e8942a62bfd9d30e2e5a2a9108f5ed4f3d16e5b4f3e92ba9fe92c61a1de7859-merged.mount: Deactivated successfully.
Dec 09 16:20:46 compute-0 podman[234365]: 2025-12-09 16:20:46.647562252 +0000 UTC m=+0.155172419 container remove d807af2ac14af136a2dd5d80dbfb903da32b6a5619e2a633fea5816c7c3b59fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_yalow, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 16:20:46 compute-0 systemd[1]: libpod-conmon-d807af2ac14af136a2dd5d80dbfb903da32b6a5619e2a633fea5816c7c3b59fa.scope: Deactivated successfully.
Dec 09 16:20:46 compute-0 podman[234525]: 2025-12-09 16:20:46.808243446 +0000 UTC m=+0.036659003 container create b25947a1f919dfc0342a80b65b64de0bcbfb156c260d5b80ad8ed793122d8830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:20:46 compute-0 sudo[234565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrlhxouyjoltnwvvxquwcaqistsxwkey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297246.5843322-918-38259047906488/AnsiballZ_command.py'
Dec 09 16:20:46 compute-0 sudo[234565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:46 compute-0 systemd[1]: Started libpod-conmon-b25947a1f919dfc0342a80b65b64de0bcbfb156c260d5b80ad8ed793122d8830.scope.
Dec 09 16:20:46 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c93f837a236c4e452d79acae65dfe16857be5e8a6f4684156cea465a44c2a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c93f837a236c4e452d79acae65dfe16857be5e8a6f4684156cea465a44c2a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c93f837a236c4e452d79acae65dfe16857be5e8a6f4684156cea465a44c2a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c93f837a236c4e452d79acae65dfe16857be5e8a6f4684156cea465a44c2a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:46 compute-0 podman[234525]: 2025-12-09 16:20:46.792645103 +0000 UTC m=+0.021060670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:20:46 compute-0 podman[234525]: 2025-12-09 16:20:46.892369086 +0000 UTC m=+0.120784653 container init b25947a1f919dfc0342a80b65b64de0bcbfb156c260d5b80ad8ed793122d8830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 09 16:20:46 compute-0 podman[234525]: 2025-12-09 16:20:46.904778678 +0000 UTC m=+0.133194225 container start b25947a1f919dfc0342a80b65b64de0bcbfb156c260d5b80ad8ed793122d8830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:20:46 compute-0 podman[234525]: 2025-12-09 16:20:46.908339279 +0000 UTC m=+0.136754836 container attach b25947a1f919dfc0342a80b65b64de0bcbfb156c260d5b80ad8ed793122d8830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:20:47 compute-0 python3.9[234569]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:20:47 compute-0 sudo[234565]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]: {
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:     "0": [
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:         {
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "devices": [
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "/dev/loop3"
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             ],
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_name": "ceph_lv0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_size": "21470642176",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "name": "ceph_lv0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "tags": {
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.cluster_name": "ceph",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.crush_device_class": "",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.encrypted": "0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.objectstore": "bluestore",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.osd_id": "0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.type": "block",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.vdo": "0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.with_tpm": "0"
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             },
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "type": "block",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "vg_name": "ceph_vg0"
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:         }
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:     ],
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:     "1": [
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:         {
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "devices": [
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "/dev/loop4"
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             ],
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_name": "ceph_lv1",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_size": "21470642176",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "name": "ceph_lv1",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "tags": {
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.cluster_name": "ceph",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.crush_device_class": "",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.encrypted": "0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.objectstore": "bluestore",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.osd_id": "1",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.type": "block",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.vdo": "0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.with_tpm": "0"
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             },
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "type": "block",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "vg_name": "ceph_vg1"
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:         }
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:     ],
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:     "2": [
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:         {
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "devices": [
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "/dev/loop5"
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             ],
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_name": "ceph_lv2",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_size": "21470642176",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "name": "ceph_lv2",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "tags": {
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.cluster_name": "ceph",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.crush_device_class": "",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.encrypted": "0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.objectstore": "bluestore",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.osd_id": "2",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.type": "block",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.vdo": "0",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:                 "ceph.with_tpm": "0"
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             },
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "type": "block",
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:             "vg_name": "ceph_vg2"
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:         }
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]:     ]
Dec 09 16:20:47 compute-0 pedantic_dirac[234570]: }
Dec 09 16:20:47 compute-0 systemd[1]: libpod-b25947a1f919dfc0342a80b65b64de0bcbfb156c260d5b80ad8ed793122d8830.scope: Deactivated successfully.
Dec 09 16:20:47 compute-0 podman[234525]: 2025-12-09 16:20:47.243152061 +0000 UTC m=+0.471567608 container died b25947a1f919dfc0342a80b65b64de0bcbfb156c260d5b80ad8ed793122d8830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-35c93f837a236c4e452d79acae65dfe16857be5e8a6f4684156cea465a44c2a5-merged.mount: Deactivated successfully.
Dec 09 16:20:47 compute-0 podman[234525]: 2025-12-09 16:20:47.285525285 +0000 UTC m=+0.513940832 container remove b25947a1f919dfc0342a80b65b64de0bcbfb156c260d5b80ad8ed793122d8830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 09 16:20:47 compute-0 systemd[1]: libpod-conmon-b25947a1f919dfc0342a80b65b64de0bcbfb156c260d5b80ad8ed793122d8830.scope: Deactivated successfully.
Dec 09 16:20:47 compute-0 sudo[234295]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:47 compute-0 sudo[234692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:20:47 compute-0 sudo[234692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:20:47 compute-0 sudo[234692]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:47 compute-0 sudo[234741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:20:47 compute-0 sudo[234741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:20:47 compute-0 sudo[234790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoussvjnhjaqoauislbwrekyjmeestib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297247.184253-918-172234371580169/AnsiballZ_command.py'
Dec 09 16:20:47 compute-0 sudo[234790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:47 compute-0 python3.9[234794]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:20:47 compute-0 sudo[234790]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:47 compute-0 ceph-mon[75222]: pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:47 compute-0 podman[234809]: 2025-12-09 16:20:47.762119524 +0000 UTC m=+0.045533744 container create 26f16d47ca1ae5da3b4a5e6b8afcc62cc21d339208eaf3022004947520fe023e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 09 16:20:47 compute-0 systemd[1]: Started libpod-conmon-26f16d47ca1ae5da3b4a5e6b8afcc62cc21d339208eaf3022004947520fe023e.scope.
Dec 09 16:20:47 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:20:47 compute-0 podman[234809]: 2025-12-09 16:20:47.741968412 +0000 UTC m=+0.025382642 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:20:47 compute-0 podman[234809]: 2025-12-09 16:20:47.83414459 +0000 UTC m=+0.117558790 container init 26f16d47ca1ae5da3b4a5e6b8afcc62cc21d339208eaf3022004947520fe023e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dhawan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:20:47 compute-0 podman[234809]: 2025-12-09 16:20:47.841075577 +0000 UTC m=+0.124489777 container start 26f16d47ca1ae5da3b4a5e6b8afcc62cc21d339208eaf3022004947520fe023e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dhawan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:20:47 compute-0 amazing_dhawan[234849]: 167 167
Dec 09 16:20:47 compute-0 podman[234809]: 2025-12-09 16:20:47.84575265 +0000 UTC m=+0.129166990 container attach 26f16d47ca1ae5da3b4a5e6b8afcc62cc21d339208eaf3022004947520fe023e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:20:47 compute-0 podman[234809]: 2025-12-09 16:20:47.846500821 +0000 UTC m=+0.129915021 container died 26f16d47ca1ae5da3b4a5e6b8afcc62cc21d339208eaf3022004947520fe023e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dhawan, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:20:47 compute-0 systemd[1]: libpod-26f16d47ca1ae5da3b4a5e6b8afcc62cc21d339208eaf3022004947520fe023e.scope: Deactivated successfully.
Dec 09 16:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-614933a45bf8f5c87ec9de8402684b8bfc303758a3ea319c2fe33ab38d960741-merged.mount: Deactivated successfully.
Dec 09 16:20:47 compute-0 podman[234809]: 2025-12-09 16:20:47.87987351 +0000 UTC m=+0.163287710 container remove 26f16d47ca1ae5da3b4a5e6b8afcc62cc21d339208eaf3022004947520fe023e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:20:47 compute-0 systemd[1]: libpod-conmon-26f16d47ca1ae5da3b4a5e6b8afcc62cc21d339208eaf3022004947520fe023e.scope: Deactivated successfully.
Dec 09 16:20:48 compute-0 podman[234961]: 2025-12-09 16:20:48.077694899 +0000 UTC m=+0.046931734 container create c415f6f188ddcd2db34893807b706d04f8cace8d97c27c3d85b4f32744e92ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hopper, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 16:20:48 compute-0 systemd[1]: Started libpod-conmon-c415f6f188ddcd2db34893807b706d04f8cace8d97c27c3d85b4f32744e92ee9.scope.
Dec 09 16:20:48 compute-0 sudo[235014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glkteuoilxgfaxrknuovihwaktkhyzdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297247.861915-918-24924558713353/AnsiballZ_command.py'
Dec 09 16:20:48 compute-0 sudo[235014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:48 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b17b4d159ff93efd39bea0928d1b36c7f5804b56747db502dee3d5e0eb8b88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b17b4d159ff93efd39bea0928d1b36c7f5804b56747db502dee3d5e0eb8b88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b17b4d159ff93efd39bea0928d1b36c7f5804b56747db502dee3d5e0eb8b88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b17b4d159ff93efd39bea0928d1b36c7f5804b56747db502dee3d5e0eb8b88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:20:48 compute-0 podman[234961]: 2025-12-09 16:20:48.055521879 +0000 UTC m=+0.024758764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:20:48 compute-0 podman[234961]: 2025-12-09 16:20:48.153873184 +0000 UTC m=+0.123110049 container init c415f6f188ddcd2db34893807b706d04f8cace8d97c27c3d85b4f32744e92ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hopper, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:20:48 compute-0 podman[234961]: 2025-12-09 16:20:48.166548804 +0000 UTC m=+0.135785679 container start c415f6f188ddcd2db34893807b706d04f8cace8d97c27c3d85b4f32744e92ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Dec 09 16:20:48 compute-0 podman[234961]: 2025-12-09 16:20:48.172093461 +0000 UTC m=+0.141330306 container attach c415f6f188ddcd2db34893807b706d04f8cace8d97c27c3d85b4f32744e92ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hopper, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 09 16:20:48 compute-0 podman[235017]: 2025-12-09 16:20:48.251686392 +0000 UTC m=+0.110221092 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:20:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:48 compute-0 python3.9[235020]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:20:48 compute-0 sudo[235014]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:48 compute-0 sudo[235262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzffuaptavzxgxuqxbxboowosoeezehs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297248.4595735-918-46414242543764/AnsiballZ_command.py'
Dec 09 16:20:48 compute-0 sudo[235262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:48 compute-0 lvm[235277]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:20:48 compute-0 lvm[235275]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:20:48 compute-0 lvm[235277]: VG ceph_vg2 finished
Dec 09 16:20:48 compute-0 lvm[235275]: VG ceph_vg1 finished
Dec 09 16:20:48 compute-0 lvm[235274]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:20:48 compute-0 lvm[235274]: VG ceph_vg0 finished
Dec 09 16:20:48 compute-0 python3.9[235266]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 16:20:48 compute-0 dazzling_hopper[235016]: {}
Dec 09 16:20:48 compute-0 sudo[235262]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:48 compute-0 systemd[1]: libpod-c415f6f188ddcd2db34893807b706d04f8cace8d97c27c3d85b4f32744e92ee9.scope: Deactivated successfully.
Dec 09 16:20:48 compute-0 podman[234961]: 2025-12-09 16:20:48.982076562 +0000 UTC m=+0.951313497 container died c415f6f188ddcd2db34893807b706d04f8cace8d97c27c3d85b4f32744e92ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 09 16:20:48 compute-0 systemd[1]: libpod-c415f6f188ddcd2db34893807b706d04f8cace8d97c27c3d85b4f32744e92ee9.scope: Consumed 1.325s CPU time.
Dec 09 16:20:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4b17b4d159ff93efd39bea0928d1b36c7f5804b56747db502dee3d5e0eb8b88-merged.mount: Deactivated successfully.
Dec 09 16:20:49 compute-0 podman[234961]: 2025-12-09 16:20:49.039532814 +0000 UTC m=+1.008769649 container remove c415f6f188ddcd2db34893807b706d04f8cace8d97c27c3d85b4f32744e92ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hopper, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:20:49 compute-0 systemd[1]: libpod-conmon-c415f6f188ddcd2db34893807b706d04f8cace8d97c27c3d85b4f32744e92ee9.scope: Deactivated successfully.
Dec 09 16:20:49 compute-0 sudo[234741]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:20:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:20:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:20:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:20:49 compute-0 sudo[235318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:20:49 compute-0 sudo[235318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:20:49 compute-0 sudo[235318]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:49 compute-0 ceph-mon[75222]: pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
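
The recurring pgmap lines are the cluster heartbeat: 305 placement groups, all active+clean, on a 60 GiB cluster. A quick parser for this fixed format (regex written against the lines above):

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    m = PGMAP.search("pgmap v642: 305 pgs: 305 active+clean; "
                     "461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail")
    assert m and m.group("pgs") == "305" and m.group("total") == "60 GiB"
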
Dec 09 16:20:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:20:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:20:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:50 compute-0 sudo[235468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwxotliyzekgtuyxhpcstktrnafzjmbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297249.980069-997-51548162969910/AnsiballZ_file.py'
Dec 09 16:20:50 compute-0 sudo[235468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:50 compute-0 podman[235470]: 2025-12-09 16:20:50.449701635 +0000 UTC m=+0.089926956 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:20:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:20:50 compute-0 python3.9[235471]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:20:50 compute-0 sudo[235468]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:51 compute-0 sudo[235639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viuzdypobpcsozlveqhnhylczfedoltr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297250.7306852-997-112240695978627/AnsiballZ_file.py'
Dec 09 16:20:51 compute-0 sudo[235639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:51 compute-0 python3.9[235641]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:20:51 compute-0 sudo[235639]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:51 compute-0 ceph-mon[75222]: pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:51 compute-0 sudo[235791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idcqnwmwwwladkcsxxrnopnezyrocpib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297251.4653156-997-70684239898374/AnsiballZ_file.py'
Dec 09 16:20:51 compute-0 sudo[235791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:51 compute-0 python3.9[235793]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:20:51 compute-0 sudo[235791]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:52 compute-0 sudo[235943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkbacoimhztprvqotkbonerpdchnatuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297252.182436-1019-70178196007972/AnsiballZ_file.py'
Dec 09 16:20:52 compute-0 sudo[235943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:52 compute-0 python3.9[235945]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:20:52 compute-0 sudo[235943]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:53 compute-0 sudo[236095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhgyvgbcrajwgjsiqrwcrcrkuwlbnuhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297252.8599508-1019-145481707861743/AnsiballZ_file.py'
Dec 09 16:20:53 compute-0 sudo[236095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:53 compute-0 python3.9[236097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:20:53 compute-0 sudo[236095]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:53 compute-0 ceph-mon[75222]: pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:53 compute-0 sudo[236247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtminqbiebbdowazgjbszeizvsrvycqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297253.5573149-1019-263091200638266/AnsiballZ_file.py'
Dec 09 16:20:53 compute-0 sudo[236247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:54 compute-0 python3.9[236249]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:20:54 compute-0 sudo[236247]: pam_unix(sudo:session): session closed for user root
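
The ansible.builtin.file tasks in this stretch pre-create the nova state and config directories (/var/lib/openstack/config/nova, /var/lib/nova, /var/lib/nova/instances, ...) with setype=container_file_t so the containers can mount them. A rough Python equivalent of one such task (a sketch, not the module source):

    import grp
    import os
    import pwd
    import subprocess

    def make_dir(path, owner, group, mode, setype="container_file_t"):
        os.makedirs(path, mode=mode, exist_ok=True)
        os.chown(path, pwd.getpwnam(owner).pw_uid, grp.getgrnam(group).gr_gid)
        os.chmod(path, mode)  # re-apply: makedirs is subject to the umask
        subprocess.run(["chcon", "-t", setype, path], check=True)

    make_dir("/var/lib/nova/instances", "zuul", "zuul", 0o755)
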
Dec 09 16:20:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:54 compute-0 sudo[236399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diiadeifnffppskcyqnrhryyufcpolib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297254.1931558-1019-144865255077545/AnsiballZ_file.py'
Dec 09 16:20:54 compute-0 sudo[236399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:54 compute-0 python3.9[236401]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:20:54 compute-0 sudo[236399]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:55 compute-0 sudo[236551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbtacvzxqlyuqwwkqoykcyetimgkczoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297254.8752234-1019-134355447221526/AnsiballZ_file.py'
Dec 09 16:20:55 compute-0 sudo[236551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:55 compute-0 python3.9[236553]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:20:55 compute-0 sudo[236551]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:20:55 compute-0 ceph-mon[75222]: pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:55 compute-0 sudo[236703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymfuneivcwwyoztubwwnyjzwlgyhxwxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297255.5591917-1019-180937182064729/AnsiballZ_file.py'
Dec 09 16:20:55 compute-0 sudo[236703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:56 compute-0 python3.9[236705]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:20:56 compute-0 sudo[236703]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:20:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:20:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:20:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:20:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:20:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:20:56 compute-0 sudo[236855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoqwptgttvgpjlhafusoowquvxveerxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297256.2659807-1019-22739621166558/AnsiballZ_file.py'
Dec 09 16:20:56 compute-0 sudo[236855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:20:56 compute-0 python3.9[236857]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:20:56 compute-0 sudo[236855]: pam_unix(sudo:session): session closed for user root
Dec 09 16:20:57 compute-0 ceph-mon[75222]: pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:20:59 compute-0 ceph-mon[75222]: pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:21:01 compute-0 ceph-mon[75222]: pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:02 compute-0 sudo[237007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwccqbzxlmftplbdwvafrssmbigthzaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297261.966889-1208-130326229616486/AnsiballZ_getent.py'
Dec 09 16:21:02 compute-0 sudo[237007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:02 compute-0 python3.9[237009]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 09 16:21:02 compute-0 sudo[237007]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:02 compute-0 ceph-mon[75222]: pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:03 compute-0 sudo[237160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knbynkpjpzxnyrhznrugktzwpvtiojwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297262.8386004-1216-217781744852073/AnsiballZ_group.py'
Dec 09 16:21:03 compute-0 sudo[237160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:03 compute-0 python3.9[237162]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 16:21:03 compute-0 groupadd[237163]: group added to /etc/group: name=nova, GID=42436
Dec 09 16:21:03 compute-0 groupadd[237163]: group added to /etc/gshadow: name=nova
Dec 09 16:21:03 compute-0 groupadd[237163]: new group: name=nova, GID=42436
Dec 09 16:21:03 compute-0 sudo[237160]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:04 compute-0 sudo[237335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujagrvkfzvbykbsattyvrdaccgsjswis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297263.7817924-1224-55932075463043/AnsiballZ_user.py'
Dec 09 16:21:04 compute-0 podman[237292]: 2025-12-09 16:21:04.301588618 +0000 UTC m=+0.046434280 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd)
Dec 09 16:21:04 compute-0 sudo[237335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:04 compute-0 python3.9[237341]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 09 16:21:04 compute-0 useradd[237344]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 09 16:21:04 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 16:21:04 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 16:21:04 compute-0 useradd[237344]: add 'nova' to group 'libvirt'
Dec 09 16:21:04 compute-0 useradd[237344]: add 'nova' to shadow group 'libvirt'
Dec 09 16:21:04 compute-0 sudo[237335]: pam_unix(sudo:session): session closed for user root
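
After the getent probe, the group and user tasks above create the nova account with the fixed IDs the images expect (UID/GID 42436) and add it to libvirt. Reduced to the underlying shadow-utils calls, which the groupadd/useradd log lines confirm:

    import subprocess

    subprocess.run(["groupadd", "--gid", "42436", "nova"], check=True)
    subprocess.run(["useradd", "--uid", "42436", "--gid", "42436",
                    "--groups", "libvirt", "--shell", "/bin/sh",
                    "--comment", "nova user", "--create-home", "nova"],
                   check=True)
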
Dec 09 16:21:05 compute-0 ceph-mon[75222]: pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:05 compute-0 sshd-session[237376]: Accepted publickey for zuul from 192.168.122.30 port 41544 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:21:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:21:05 compute-0 systemd-logind[786]: New session 51 of user zuul.
Dec 09 16:21:05 compute-0 systemd[1]: Started Session 51 of User zuul.
Dec 09 16:21:05 compute-0 sshd-session[237376]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:21:05 compute-0 sshd-session[237379]: Received disconnect from 192.168.122.30 port 41544:11: disconnected by user
Dec 09 16:21:05 compute-0 sshd-session[237379]: Disconnected from user zuul 192.168.122.30 port 41544
Dec 09 16:21:05 compute-0 sshd-session[237376]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:21:05 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Dec 09 16:21:05 compute-0 systemd-logind[786]: Session 51 logged out. Waiting for processes to exit.
Dec 09 16:21:05 compute-0 systemd-logind[786]: Removed session 51.
Dec 09 16:21:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:06 compute-0 python3.9[237529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:21:06 compute-0 python3.9[237650]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765297265.917745-1249-1875289359628/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
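
Each stat/copy pair above is Ansible's idempotence check: the rendered template is only copied when its SHA-1 differs from the file already on disk (both modules log checksum_algorithm=sha1). The digest shown in the log can be reproduced with:

    import hashlib

    def sha1_of(path: str) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # For the file just written, this returns
    # b51012bfb0ca26296dcf3793a2f284446fb1395e per the copy task above.
    print(sha1_of("/var/lib/openstack/config/nova/config.json"))
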
Dec 09 16:21:07 compute-0 ceph-mon[75222]: pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:07 compute-0 python3.9[237800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:21:08 compute-0 python3.9[237876]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:21:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:08 compute-0 python3.9[238026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:21:09 compute-0 sshd-session[238028]: Invalid user odoo from 146.190.31.45 port 39032
Dec 09 16:21:09 compute-0 sshd-session[238028]: Connection closed by invalid user odoo 146.190.31.45 port 39032 [preauth]
Dec 09 16:21:09 compute-0 python3.9[238149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765297268.256279-1249-11800023978435/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:21:09 compute-0 ceph-mon[75222]: pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:09 compute-0 python3.9[238299]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:21:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:10 compute-0 python3.9[238420]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765297269.490059-1249-43544531033680/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:21:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:21:11 compute-0 python3.9[238570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:21:11 compute-0 ceph-mon[75222]: pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:11 compute-0 python3.9[238691]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765297270.699071-1249-275718682232852/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:21:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:12 compute-0 python3.9[238841]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:21:12 compute-0 python3.9[238962]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765297271.8996122-1249-123680055465237/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:21:12 compute-0 ceph-mon[75222]: pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:13 compute-0 sudo[239112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roagktbwjtydcqdleovgquztdlkgbxox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297273.050337-1332-23795294697882/AnsiballZ_file.py'
Dec 09 16:21:13 compute-0 sudo[239112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:13 compute-0 python3.9[239114]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:21:13 compute-0 sudo[239112]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:14 compute-0 sudo[239264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgeuwbnhlqljgxtfczhlllvrmlhwfxti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297273.719725-1340-212909541176715/AnsiballZ_copy.py'
Dec 09 16:21:14 compute-0 sudo[239264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:14 compute-0 python3.9[239266]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:21:14 compute-0 sudo[239264]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:14 compute-0 sudo[239416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvkqffvghxxhhnwscgigfnkqaixrruxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297274.5393364-1348-16558446915543/AnsiballZ_stat.py'
Dec 09 16:21:14 compute-0 sudo[239416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:15 compute-0 python3.9[239418]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:21:15 compute-0 sudo[239416]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:15 compute-0 sudo[239568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njadgkazenhsxbqflacajzzmxwxkczaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297275.1922786-1356-6202835526410/AnsiballZ_stat.py'
Dec 09 16:21:15 compute-0 sudo[239568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:21:15 compute-0 ceph-mon[75222]: pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:15 compute-0 python3.9[239570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:21:15 compute-0 sudo[239568]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:16 compute-0 sudo[239691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uixlcfxplvuuwfysygsowdbhnaopuupd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297275.1922786-1356-6202835526410/AnsiballZ_copy.py'
Dec 09 16:21:16 compute-0 sudo[239691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:16 compute-0 python3.9[239693]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765297275.1922786-1356-6202835526410/.source _original_basename=.l90_adet follow=False checksum=c74c754ebbeabcdfee9be30d387318af5592ba99 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 09 16:21:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:16 compute-0 sudo[239691]: pam_unix(sudo:session): session closed for user root
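
The compute_id write above is worth a note: the file ends up owned nova:nova, mode 0400, and the copy passes attributes=+i, the filesystem immutable flag, so the node's stable compute UUID cannot be modified or deleted later without first clearing the flag. The attribute maps to chattr (illustrative):

    import subprocess

    # Same effect as attributes=+i in the copy task above; undoing it
    # requires an explicit chattr -i, even as root.
    subprocess.run(["chattr", "+i", "/var/lib/nova/compute_id"], check=True)
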
Dec 09 16:21:17 compute-0 python3.9[239845]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:21:17 compute-0 ceph-mon[75222]: pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:21:17.838 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:21:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:21:17.840 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:21:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:21:17.840 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:21:17 compute-0 python3.9[239997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:21:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:18 compute-0 python3.9[240118]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765297277.4206073-1382-164206046339344/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=209f20105d13c02e6cb251483bae1beb11a1258f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:21:18 compute-0 podman[240119]: 2025-12-09 16:21:18.58502125 +0000 UTC m=+0.079950852 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 09 16:21:19 compute-0 python3.9[240294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 16:21:19 compute-0 ceph-mon[75222]: pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:19 compute-0 python3.9[240415]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765297278.6905873-1397-206214195683315/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=0333d3a3f5c3a0526b0ebe430250032166710e8a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 16:21:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:20 compute-0 sudo[240565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bahfhyducxqghbsdgtfhpkvrzskzwxds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297280.0952506-1414-170683661004438/AnsiballZ_container_config_data.py'
Dec 09 16:21:20 compute-0 sudo[240565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:21:20 compute-0 python3.9[240567]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 09 16:21:20 compute-0 sudo[240565]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:20 compute-0 podman[240568]: 2025-12-09 16:21:20.613566338 +0000 UTC m=+0.061779496 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 09 16:21:21 compute-0 sudo[240736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvrplmyfscplcgbclxlduyqonxofzqzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297280.8094664-1423-217453715856035/AnsiballZ_container_config_hash.py'
Dec 09 16:21:21 compute-0 sudo[240736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:21 compute-0 ceph-mon[75222]: pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:21 compute-0 python3.9[240738]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 16:21:21 compute-0 sudo[240736]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:22 compute-0 sudo[240888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjrmwjuebetcterictjxrkmmdvtypjga ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765297281.6484444-1433-79857362061410/AnsiballZ_edpm_container_manage.py'
Dec 09 16:21:22 compute-0 sudo[240888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:22 compute-0 python3[240890]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
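
edpm_container_manage scans config_dir for files matching config_patterns and materialises each JSON definition as a podman container (the generated CLI is logged below at 16:21:34). A simplified sketch of that scan step (an assumption about the flow, not the actual module code):

    import glob
    import json
    import os

    config_dir = "/var/lib/openstack/config/containers"
    pattern = "nova_compute_init.json"

    configs = {}
    for path in glob.glob(os.path.join(config_dir, pattern)):
        name = os.path.splitext(os.path.basename(path))[0]
        with open(path) as f:
            configs[name] = json.load(f)  # e.g. configs["nova_compute_init"]
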
Dec 09 16:21:23 compute-0 ceph-mon[75222]: pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:25 compute-0 ceph-mon[75222]: pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:21:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:21:25
Dec 09 16:21:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:21:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:21:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'images', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.meta']
Dec 09 16:21:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:21:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:21:27 compute-0 ceph-mon[75222]: pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:30 compute-0 ceph-mon[75222]: pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:21:31 compute-0 ceph-mon[75222]: pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:34 compute-0 ceph-mon[75222]: pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:34 compute-0 podman[240902]: 2025-12-09 16:21:34.331983037 +0000 UTC m=+11.904234341 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Dec 09 16:21:34 compute-0 podman[240983]: 2025-12-09 16:21:34.501662597 +0000 UTC m=+0.051379160 container create bc119f4700c999109cd900087980eaf196570400e85a788d7b2a75fea4b50163 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 16:21:34 compute-0 podman[240983]: 2025-12-09 16:21:34.473685652 +0000 UTC m=+0.023402245 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Dec 09 16:21:34 compute-0 python3[240890]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
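
The PODMAN-CONTAINER-DEBUG line above shows the dict-to-CLI translation directly: net becomes --network, security_opt becomes --security-opt, each environment entry an --env, each volume a --volume. A sketch of that mapping using the field names visible in config_data (hypothetical helper, not the module's real code):

    def podman_create_args(name: str, cfg: dict) -> list:
        args = ["podman", "create", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        args += ["--network", cfg.get("net", "bridge")]
        for opt in cfg.get("security_opt", []):
            args += ["--security-opt", opt]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        args.append(cfg["image"])
        # naive whitespace split; the real module passes the command through
        args += cfg.get("command", "").split()
        return args
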
Dec 09 16:21:34 compute-0 sudo[240888]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:34 compute-0 podman[241007]: 2025-12-09 16:21:34.621771459 +0000 UTC m=+0.066098778 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 16:21:35 compute-0 sudo[241189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmrszyrofloypzqcutvxkjtfsidlcskn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297294.7795036-1441-235487120308295/AnsiballZ_stat.py'
Dec 09 16:21:35 compute-0 sudo[241189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:35 compute-0 python3.9[241191]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:21:35 compute-0 ceph-mon[75222]: pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:35 compute-0 sudo[241189]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:21:35 compute-0 sudo[241343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nglwcrsloturdvnnbqhqlxmsolflarhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297295.6826355-1453-205647892376883/AnsiballZ_container_config_data.py'
Dec 09 16:21:35 compute-0 sudo[241343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:36 compute-0 python3.9[241345]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
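The container_config_data call above gathers the JSON container definitions matching config_pattern under config_path. Judging only from the parameters logged (config_overrides, config_path, config_pattern), its effect is roughly the glob-and-load below; this is a sketch under that assumption, not the module source.

    # Assumed behaviour of the container_config_data step above: collect
    # JSON definitions matching config_pattern and apply config_overrides.
    import glob, json, os

    def load_container_configs(config_path, config_pattern, config_overrides=None):
        configs = {}
        for path in glob.glob(os.path.join(config_path, config_pattern)):
            with open(path) as f:
                data = json.load(f)
            data.update(config_overrides or {})
            configs[os.path.basename(path)] = data
        return configs

    # e.g. load_container_configs("/var/lib/openstack/config/containers",
    #                             "nova_compute.json")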
Dec 09 16:21:36 compute-0 sudo[241343]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:21:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
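The pg_autoscaler lines above all fit one formula: the raw pg target is the pool's share of capacity times its bias times a constant 300, which in a 60 GiB cluster is plausibly the default mon_target_pg_per_osd (100) times three OSDs; the OSD count is an assumption, since it is not logged in this section. The check below verifies that arithmetic against three of the logged lines. The raw target is then quantized (to 1, 16, or 32 here), and pg_num is left at its current value unless target and current diverge substantially.

    # Verifying the pg_autoscaler arithmetic from the lines above:
    #   pg_target = capacity_ratio * bias * K, with K = 300 here
    # (K = mon_target_pg_per_osd * num_osds is an assumption: 100 * 3).
    for pool, ratio, bias, logged_target in [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 2.0333656678172135e-06, 4.0, 0.002440038801380656),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    ]:
        assert abs(ratio * bias * 300 - logged_target) < 1e-12, pool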
Dec 09 16:21:36 compute-0 sudo[241495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdeikuumzepznkxjcwpeqdelegrasgee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297296.4363625-1462-133032283162673/AnsiballZ_container_config_hash.py'
Dec 09 16:21:36 compute-0 sudo[241495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:36 compute-0 python3.9[241497]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 16:21:36 compute-0 sudo[241495]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:37 compute-0 ceph-mon[75222]: pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:37 compute-0 sudo[241647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtxblhifiuetpcixchcrgzprudwybumi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765297297.1785853-1472-243164381644417/AnsiballZ_edpm_container_manage.py'
Dec 09 16:21:37 compute-0 sudo[241647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:37 compute-0 python3[241649]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 16:21:37 compute-0 podman[241685]: 2025-12-09 16:21:37.956146655 +0000 UTC m=+0.071935425 container create 9f2fa752ba80f2edb6a6ed5e7e6142147c4b695355174f47652b45554431a9ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:21:37 compute-0 podman[241685]: 2025-12-09 16:21:37.914357207 +0000 UTC m=+0.030146027 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Dec 09 16:21:37 compute-0 python3[241649]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b kolla_start
Dec 09 16:21:38 compute-0 sudo[241647]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:38 compute-0 sudo[241877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbsuuvpkjcgatliwwhdgfdqplvynkpcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297298.362938-1480-96739036914615/AnsiballZ_stat.py'
Dec 09 16:21:38 compute-0 sudo[241877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:39 compute-0 ceph-mon[75222]: pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:39 compute-0 python3.9[241879]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:21:39 compute-0 sudo[241877]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:40 compute-0 sudo[242031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjejcirzrwcojdclkgjcmubfernubvej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297299.989843-1489-174609165589244/AnsiballZ_file.py'
Dec 09 16:21:40 compute-0 sudo[242031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:40 compute-0 python3.9[242033]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:21:40 compute-0 sudo[242031]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:21:40 compute-0 sudo[242182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acgrpauvozoukakeaqsrggolggwuefcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297300.5323107-1489-63285789086841/AnsiballZ_copy.py'
Dec 09 16:21:40 compute-0 sudo[242182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:41 compute-0 python3.9[242184]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765297300.5323107-1489-63285789086841/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 16:21:41 compute-0 sudo[242182]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:41 compute-0 sudo[242258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yivzioaieiyobhhnbeltvvmnbjramiol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297300.5323107-1489-63285789086841/AnsiballZ_systemd.py'
Dec 09 16:21:41 compute-0 sudo[242258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:41 compute-0 ceph-mon[75222]: pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:41 compute-0 python3.9[242260]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 16:21:41 compute-0 systemd[1]: Reloading.
Dec 09 16:21:41 compute-0 systemd-sysv-generator[242292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:21:41 compute-0 systemd-rc-local-generator[242287]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:21:42 compute-0 sudo[242258]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:42 compute-0 sudo[242370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppaeuchthrjgjdtdftzosxwkgdandauw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297300.5323107-1489-63285789086841/AnsiballZ_systemd.py'
Dec 09 16:21:42 compute-0 sudo[242370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:42 compute-0 python3.9[242372]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 16:21:42 compute-0 systemd[1]: Reloading.
Dec 09 16:21:43 compute-0 systemd-rc-local-generator[242399]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:21:43 compute-0 systemd-sysv-generator[242402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:21:43 compute-0 systemd[1]: Starting nova_compute container...
Dec 09 16:21:43 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425d596bb7daad543f9d5011c58c09bcb5d964e2cdbbdbfcaf1ccef172b7cf6/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425d596bb7daad543f9d5011c58c09bcb5d964e2cdbbdbfcaf1ccef172b7cf6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425d596bb7daad543f9d5011c58c09bcb5d964e2cdbbdbfcaf1ccef172b7cf6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425d596bb7daad543f9d5011c58c09bcb5d964e2cdbbdbfcaf1ccef172b7cf6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425d596bb7daad543f9d5011c58c09bcb5d964e2cdbbdbfcaf1ccef172b7cf6/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:43 compute-0 podman[242412]: 2025-12-09 16:21:43.5856293 +0000 UTC m=+0.235740878 container init 9f2fa752ba80f2edb6a6ed5e7e6142147c4b695355174f47652b45554431a9ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202)
Dec 09 16:21:43 compute-0 podman[242412]: 2025-12-09 16:21:43.596021415 +0000 UTC m=+0.246132983 container start 9f2fa752ba80f2edb6a6ed5e7e6142147c4b695355174f47652b45554431a9ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:21:43 compute-0 nova_compute[242427]: + sudo -E kolla_set_configs
Dec 09 16:21:43 compute-0 podman[242412]: nova_compute
Dec 09 16:21:43 compute-0 systemd[1]: Started nova_compute container.
Dec 09 16:21:43 compute-0 sudo[242370]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Validating config file
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Copying service configuration files
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Deleting /etc/ceph
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Creating directory /etc/ceph
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /etc/ceph
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Writing out command to execute
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 09 16:21:43 compute-0 nova_compute[242427]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 09 16:21:43 compute-0 nova_compute[242427]: ++ cat /run_command
Dec 09 16:21:43 compute-0 nova_compute[242427]: + CMD=nova-compute
Dec 09 16:21:43 compute-0 nova_compute[242427]: + ARGS=
Dec 09 16:21:43 compute-0 nova_compute[242427]: + sudo kolla_copy_cacerts
Dec 09 16:21:43 compute-0 ceph-mon[75222]: pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:43 compute-0 nova_compute[242427]: + [[ ! -n '' ]]
Dec 09 16:21:43 compute-0 nova_compute[242427]: + . kolla_extend_start
Dec 09 16:21:43 compute-0 nova_compute[242427]: Running command: 'nova-compute'
Dec 09 16:21:43 compute-0 nova_compute[242427]: + echo 'Running command: '\''nova-compute'\'''
Dec 09 16:21:43 compute-0 nova_compute[242427]: + umask 0022
Dec 09 16:21:43 compute-0 nova_compute[242427]: + exec nova-compute
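The block above is kolla's two-phase container start: kolla_set_configs walks the config_files entries in /var/lib/kolla/config_files/config.json, deletes each destination, copies the source over it and re-applies permissions (the COPY_ALWAYS strategy logged at the top of the block), then writes the service command to /run_command; the wrapper then cats /run_command and execs it (`exec nova-compute`). Below is a compressed sketch of that copy loop, assuming the minimal config.json schema the log implies (source/dest/perm per entry plus a top-level command); the real tool also handles ownership, globs, and merge strategies omitted here.

    # Sketch of the kolla_set_configs copy loop traced above (minimal
    # schema assumed: {"command": ..., "config_files": [{"source", "dest",
    # "perm"}, ...]}; ownership and glob handling omitted).
    import json, os, shutil

    def set_configs(config_json="/var/lib/kolla/config_files/config.json"):
        with open(config_json) as f:
            cfg = json.load(f)
        for item in cfg.get("config_files", []):
            src, dest = item["source"], item["dest"]
            if os.path.isdir(dest):                  # "Deleting /etc/ceph"
                shutil.rmtree(dest)
            elif os.path.lexists(dest):              # "Deleting /etc/nova/nova.conf"
                os.unlink(dest)
            if os.path.isdir(src):                   # "Creating directory /etc/ceph"
                shutil.copytree(src, dest)
            else:                                    # "Copying ... to ..."
                shutil.copy(src, dest)
            os.chmod(dest, int(item.get("perm", "0600"), 8))  # "Setting permission"
        with open("/run_command", "w") as f:         # "Writing out command to execute"
            f.write(cfg["command"])
        # kolla_start then does: CMD=$(cat /run_command); exec $CMD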
Dec 09 16:21:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:44 compute-0 python3.9[242589]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:21:45 compute-0 python3.9[242739]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:21:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:21:45 compute-0 ceph-mon[75222]: pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:45 compute-0 nova_compute[242427]: 2025-12-09 16:21:45.769 242431 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 09 16:21:45 compute-0 nova_compute[242427]: 2025-12-09 16:21:45.770 242431 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 09 16:21:45 compute-0 nova_compute[242427]: 2025-12-09 16:21:45.770 242431 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 09 16:21:45 compute-0 nova_compute[242427]: 2025-12-09 16:21:45.770 242431 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 09 16:21:45 compute-0 nova_compute[242427]: 2025-12-09 16:21:45.913 242431 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:21:45 compute-0 nova_compute[242427]: 2025-12-09 16:21:45.925 242431 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:21:45 compute-0 nova_compute[242427]: 2025-12-09 16:21:45.926 242431 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
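The failed grep above is the volume connector (os-brick, by the look of the oslo_concurrency trace) probing whether iscsiadm understands node.session.scan, i.e. manual scan mode; exit status 1 from grep -F simply means the literal string is absent. Note that earlier in this block kolla replaced /usr/sbin/iscsiadm with the run-on-host wrapper, so the probe greps that wrapper script rather than a real iscsiadm binary, and manual scan support is treated as unavailable. The probe amounts to:

    # The probe logged above, as a function: grep -F exits 0 only if the
    # literal string occurs in the file, so a returncode of 1 (as logged)
    # means "no node.session.scan support". (Attribution to os-brick is an
    # inference from the oslo_concurrency.processutils trace.)
    import subprocess

    def iscsiadm_supports_manual_scan(path="/sbin/iscsiadm"):
        result = subprocess.run(
            ["grep", "-F", "node.session.scan", path],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0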
Dec 09 16:21:46 compute-0 python3.9[242893]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 16:21:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.528 242431 INFO nova.virt.driver [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.646 242431 INFO nova.compute.provider_config [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.667 242431 DEBUG oslo_concurrency.lockutils [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.667 242431 DEBUG oslo_concurrency.lockutils [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.668 242431 DEBUG oslo_concurrency.lockutils [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.668 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.668 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.668 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.669 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.669 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.669 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.670 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.670 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.670 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.670 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.671 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.671 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.671 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.671 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.672 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.672 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.672 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.672 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.673 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.673 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.673 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.673 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.674 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.674 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.674 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.674 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.675 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.675 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.675 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.676 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.676 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.676 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.676 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.677 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.677 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.677 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.677 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.677 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.678 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.678 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.678 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.679 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.679 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.679 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.679 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.680 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.680 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.680 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.680 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.681 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.681 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.682 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.682 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.682 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.682 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.683 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.683 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.683 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.683 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.684 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.684 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.685 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.686 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.686 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.686 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.686 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.686 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.687 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.687 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.687 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.687 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.687 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.687 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.688 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.688 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.688 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.688 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.688 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.689 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.689 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.689 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.689 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.689 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.689 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.690 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.690 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.690 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.690 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.690 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.690 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.690 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.691 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.691 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.691 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.691 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.691 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.691 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.692 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.692 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.692 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.692 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.692 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.692 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.693 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.693 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.693 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.693 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.693 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.693 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.694 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.694 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.694 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.694 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.694 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.694 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.694 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.695 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.695 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.695 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.695 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.695 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.695 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.696 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.696 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.696 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.696 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.696 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.696 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.696 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.697 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.697 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.697 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.697 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.697 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.697 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.697 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.698 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.698 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.698 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.698 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.698 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.698 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.698 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.699 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.699 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.699 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.699 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.699 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.699 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.700 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.700 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.700 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.700 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.701 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.701 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.701 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.701 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.701 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.702 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.702 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.702 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.702 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.702 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.702 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.703 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.703 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.703 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.703 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.703 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.704 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.704 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.704 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.704 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.704 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.705 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.705 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.705 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.705 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.705 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.705 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.706 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.706 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.706 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.706 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.706 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.706 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.706 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.707 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.707 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.707 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.707 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.707 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.707 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.707 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.708 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.708 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.708 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.708 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.708 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.708 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.709 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.709 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.709 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.709 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.710 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.710 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.710 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.710 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.710 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.710 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.710 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.711 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.711 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.711 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.711 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.711 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.711 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.711 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.712 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.712 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.712 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.712 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.712 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.712 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.712 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.713 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.713 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.713 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.713 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.713 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.713 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.713 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.714 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.714 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.714 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.714 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.714 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.715 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.715 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.715 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.715 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.715 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.715 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.716 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.716 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.716 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.716 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.716 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.716 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.717 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.717 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.717 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.717 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.717 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.717 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.717 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.718 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.718 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.718 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.718 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.718 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.718 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.719 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.719 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.719 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.719 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.719 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.719 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.720 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.720 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.720 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.720 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.720 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.720 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.721 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.721 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.721 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.721 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.721 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.721 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.721 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.722 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.722 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.722 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.722 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.722 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.722 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.722 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.723 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.723 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.723 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.723 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.723 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.723 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.723 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.724 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.724 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.724 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.724 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.724 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.724 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.725 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.725 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.725 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.725 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.725 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.725 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.726 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.726 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.726 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.726 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.726 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.726 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.726 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.726 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.727 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.727 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.727 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.727 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.727 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.727 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.727 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.728 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.728 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.728 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.728 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.728 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.728 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.729 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.729 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.729 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.729 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.729 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.729 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.730 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.730 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.730 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.730 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.730 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.731 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.731 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.731 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.731 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.731 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.731 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.731 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.732 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.732 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.732 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.732 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.732 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.732 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.732 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.733 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.733 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.733 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.733 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.733 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.734 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.734 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.734 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.734 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.734 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.734 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.735 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.735 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.735 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.735 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.735 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.735 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.735 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.736 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.736 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.736 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.736 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.736 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.736 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.737 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.737 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.737 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.737 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.737 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.738 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.738 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.738 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.738 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.738 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.738 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.738 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.739 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.739 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.739 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.739 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.740 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.740 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.740 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.740 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.740 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.740 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.741 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.741 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.741 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.741 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.741 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.741 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.742 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.742 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.743 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.743 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.744 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.744 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.744 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.744 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.744 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.745 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.745 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.745 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.745 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.745 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.746 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.746 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.746 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.746 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.746 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.747 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.747 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.747 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.747 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.747 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.748 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.748 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.748 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.748 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.748 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.749 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.749 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.749 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.749 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.749 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.749 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.750 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.750 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.750 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.750 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.751 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.751 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.751 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.751 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.751 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.751 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.752 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.752 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.752 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.752 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.752 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.753 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.753 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.753 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.753 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.753 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.754 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.754 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.754 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.754 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.754 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.755 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.755 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.755 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.755 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.755 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.755 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.756 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.756 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.756 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.756 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.756 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.757 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.757 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.757 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.757 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.757 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.758 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.758 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.758 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.758 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.758 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.758 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.759 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.759 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.759 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.759 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.760 242431 WARNING oslo_config.cfg [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 09 16:21:46 compute-0 nova_compute[242427]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 09 16:21:46 compute-0 nova_compute[242427]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 09 16:21:46 compute-0 nova_compute[242427]: and ``live_migration_inbound_addr`` respectively.
Dec 09 16:21:46 compute-0 nova_compute[242427]: ).  Its value may be silently ignored in the future.
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.760 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.760 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.760 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.760 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.761 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.761 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.761 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.761 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.761 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.761 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.762 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.762 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.762 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.762 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.762 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.762 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.763 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.763 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.763 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.rbd_secret_uuid        = 67f67f44-54fc-54ea-8df0-10931b6ecdaf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.763 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.763 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.763 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.764 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.764 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.764 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.764 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.764 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.764 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.765 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.765 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.765 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.765 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.765 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.766 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.766 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.766 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.766 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.766 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.766 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.767 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.767 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.767 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.767 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.767 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.767 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.768 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.768 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.768 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.768 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.768 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.768 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.768 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.769 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.769 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.769 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.769 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.769 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.769 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.770 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.770 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.770 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.770 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.770 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.770 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.770 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.771 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.771 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.771 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.771 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.771 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.771 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.771 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.772 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.772 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.772 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.772 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.772 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.773 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.773 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.773 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.773 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.773 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.774 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.774 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.774 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.774 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.774 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.774 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.775 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.775 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.775 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.775 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.775 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.776 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.776 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.776 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.776 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.776 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.776 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.777 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.777 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.777 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.777 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.777 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.777 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.777 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.778 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.778 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.778 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.778 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.778 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.778 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.778 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.779 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.779 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.779 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.779 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.779 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.779 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.780 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.780 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.780 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.780 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.780 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.781 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.781 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.781 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.781 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.781 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.781 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.782 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.782 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.782 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.782 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.782 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.782 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.783 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.783 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.783 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.783 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.783 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.784 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.784 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.784 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.784 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.784 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.784 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.785 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.785 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.785 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.785 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.785 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.785 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.786 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.786 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.786 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.786 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.786 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.786 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.787 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.787 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.787 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.787 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.787 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.788 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.788 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.788 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.788 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.788 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.788 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.789 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.789 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.789 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.789 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.789 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.790 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.790 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.790 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.790 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.790 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.791 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.791 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.791 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.791 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.791 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.791 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.792 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.792 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.792 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.792 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.792 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.793 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.793 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.793 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.793 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.793 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.793 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.794 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.794 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.794 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.794 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.795 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.795 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.795 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.795 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.795 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.795 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.796 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.796 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.796 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.796 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.796 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.797 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.797 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.797 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.797 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.797 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.798 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.798 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.798 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.798 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.798 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.799 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.799 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.799 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.799 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.799 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.799 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.799 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.800 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.800 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.800 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.800 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.800 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.800 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.800 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.801 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.801 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.801 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.801 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.801 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.801 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.801 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.802 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.802 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.802 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.802 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.802 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.802 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.803 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.803 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.803 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.803 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.803 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.803 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.804 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.804 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.804 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.804 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.804 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.804 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.804 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.805 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.805 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.805 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.805 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.805 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.805 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.805 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.806 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.806 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.806 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.806 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.806 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.806 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.806 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.807 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.807 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.807 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.807 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.807 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.807 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.808 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.808 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.808 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.808 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.808 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.809 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.809 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.809 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.809 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.809 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.810 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.810 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.810 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.810 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.810 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.811 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.811 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.811 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.811 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.811 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.811 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.812 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.812 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.812 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.812 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.812 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.812 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.813 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.813 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.813 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.813 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.813 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.813 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.813 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.814 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.814 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.814 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.814 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.814 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.814 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.814 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.815 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.815 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.815 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.815 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.815 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.815 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.816 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.816 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.816 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.816 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.816 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.816 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.816 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.817 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.817 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.817 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.817 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.817 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.817 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.818 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.818 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.818 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.818 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.818 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.818 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.818 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.819 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.819 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.819 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.819 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.819 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.819 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.819 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.820 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.820 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.820 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.820 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.820 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.820 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.821 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.821 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.821 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.821 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.821 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.821 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.822 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.822 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.822 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.822 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.822 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.822 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.822 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.823 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.823 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.823 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.823 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.823 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.823 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.823 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.824 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.824 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.824 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.824 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.824 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.824 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.825 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.825 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.825 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.825 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.825 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.825 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.826 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.826 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.826 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.826 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.826 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.827 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.827 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.827 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.827 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.827 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.827 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.828 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.828 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.828 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.828 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.828 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.828 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.828 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.829 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.829 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.829 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.829 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.829 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.830 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.830 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.830 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.830 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.830 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.830 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.831 242431 DEBUG oslo_service.service [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.832 242431 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.849 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.850 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.851 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.851 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 09 16:21:46 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 09 16:21:46 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.911 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f3f6d102580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.913 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f3f6d102580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.914 242431 INFO nova.virt.libvirt.driver [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Connection event '1' reason 'None'
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.935 242431 WARNING nova.virt.libvirt.driver [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 09 16:21:46 compute-0 nova_compute[242427]: 2025-12-09 16:21:46.936 242431 DEBUG nova.virt.libvirt.volume.mount [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 09 16:21:46 compute-0 sudo[243087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sitbdyyxsjownnpuoupxdrpgwqxevayb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297306.4600759-1549-195659348877500/AnsiballZ_podman_container.py'
Dec 09 16:21:46 compute-0 sudo[243087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:47 compute-0 python3.9[243095]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 09 16:21:47 compute-0 sudo[243087]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:47 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 16:21:47 compute-0 ceph-mon[75222]: pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:47 compute-0 nova_compute[242427]: 2025-12-09 16:21:47.758 242431 INFO nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Libvirt host capabilities <capabilities>
Dec 09 16:21:47 compute-0 nova_compute[242427]: 
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <host>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <uuid>e88682e8-711b-4c73-9c89-1e4f2bbcc348</uuid>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <cpu>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <arch>x86_64</arch>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model>EPYC-Rome-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <vendor>AMD</vendor>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <microcode version='16777317'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <signature family='23' model='49' stepping='0'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='x2apic'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='tsc-deadline'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='osxsave'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='hypervisor'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='tsc_adjust'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='spec-ctrl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='stibp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='arch-capabilities'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='cmp_legacy'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='topoext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='virt-ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='lbrv'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='tsc-scale'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='vmcb-clean'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='pause-filter'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='pfthreshold'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='svme-addr-chk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='rdctl-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='skip-l1dfl-vmentry'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='mds-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature name='pschange-mc-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <pages unit='KiB' size='4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <pages unit='KiB' size='2048'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <pages unit='KiB' size='1048576'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </cpu>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <power_management>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <suspend_mem/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </power_management>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <iommu support='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <migration_features>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <live/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <uri_transports>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <uri_transport>tcp</uri_transport>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <uri_transport>rdma</uri_transport>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </uri_transports>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </migration_features>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <topology>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <cells num='1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <cell id='0'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:           <memory unit='KiB'>7864300</memory>
Dec 09 16:21:47 compute-0 nova_compute[242427]:           <pages unit='KiB' size='4'>1966075</pages>
Dec 09 16:21:47 compute-0 nova_compute[242427]:           <pages unit='KiB' size='2048'>0</pages>
Dec 09 16:21:47 compute-0 nova_compute[242427]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 09 16:21:47 compute-0 nova_compute[242427]:           <distances>
Dec 09 16:21:47 compute-0 nova_compute[242427]:             <sibling id='0' value='10'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:           </distances>
Dec 09 16:21:47 compute-0 nova_compute[242427]:           <cpus num='8'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:           </cpus>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         </cell>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </cells>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </topology>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <cache>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </cache>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <secmodel>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model>selinux</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <doi>0</doi>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </secmodel>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <secmodel>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model>dac</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <doi>0</doi>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </secmodel>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </host>
Dec 09 16:21:47 compute-0 nova_compute[242427]: 
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <guest>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <os_type>hvm</os_type>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <arch name='i686'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <wordsize>32</wordsize>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <domain type='qemu'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <domain type='kvm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </arch>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <features>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <pae/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <nonpae/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <acpi default='on' toggle='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <apic default='on' toggle='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <cpuselection/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <deviceboot/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <disksnapshot default='on' toggle='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <externalSnapshot/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </features>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </guest>
Dec 09 16:21:47 compute-0 nova_compute[242427]: 
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <guest>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <os_type>hvm</os_type>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <arch name='x86_64'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <wordsize>64</wordsize>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <domain type='qemu'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <domain type='kvm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </arch>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <features>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <acpi default='on' toggle='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <apic default='on' toggle='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <cpuselection/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <deviceboot/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <disksnapshot default='on' toggle='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <externalSnapshot/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </features>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </guest>
Dec 09 16:21:47 compute-0 nova_compute[242427]: 
Dec 09 16:21:47 compute-0 nova_compute[242427]: </capabilities>
Dec 09 16:21:47 compute-0 nova_compute[242427]: 
Dec 09 16:21:47 compute-0 nova_compute[242427]: 2025-12-09 16:21:47.765 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 09 16:21:47 compute-0 nova_compute[242427]: 2025-12-09 16:21:47.793 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 09 16:21:47 compute-0 nova_compute[242427]: <domainCapabilities>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <domain>kvm</domain>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <arch>i686</arch>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <vcpu max='240'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <iothreads supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <os supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <enum name='firmware'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <loader supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>rom</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pflash</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='readonly'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>yes</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>no</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='secure'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>no</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </loader>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </os>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <cpu>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='host-passthrough' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='hostPassthroughMigratable'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>on</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>off</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='maximum' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='maximumMigratable'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>on</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>off</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='host-model' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <vendor>AMD</vendor>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='x2apic'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='hypervisor'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='stibp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='overflow-recov'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='succor'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='ibrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='lbrv'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='tsc-scale'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='flushbyasid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='pause-filter'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='pfthreshold'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='disable' name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='custom' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cooperlake'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cooperlake-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cooperlake-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Denverton'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Denverton-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Denverton-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Denverton-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Dhyana-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Genoa'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amd-psfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='auto-ibrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='stibp-always-on'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amd-psfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='auto-ibrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='stibp-always-on'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Milan'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Milan-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Milan-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amd-psfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='stibp-always-on'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='GraniteRapids'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='prefetchiti'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='GraniteRapids-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='prefetchiti'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='GraniteRapids-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx10'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx10-128'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx10-256'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx10-512'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='prefetchiti'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v5'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v6'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v7'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='IvyBridge'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='IvyBridge-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='IvyBridge-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='IvyBridge-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='KnightsMill'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512er'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512pf'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='KnightsMill-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512er'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512pf'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Opteron_G4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Opteron_G4-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Opteron_G5'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tbm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Opteron_G5-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tbm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SierraForest'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cmpccxadd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SierraForest-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cmpccxadd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v5'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='athlon'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='athlon-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='core2duo'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='core2duo-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='coreduo'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='coreduo-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='n270'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='n270-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='phenom'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='phenom-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </cpu>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <memoryBacking supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <enum name='sourceType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>file</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>anonymous</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>memfd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </memoryBacking>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <devices>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <disk supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='diskDevice'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>disk</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>cdrom</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>floppy</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>lun</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='bus'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>ide</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>fdc</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>scsi</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>usb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>sata</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio-transitional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio-non-transitional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </disk>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <graphics supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vnc</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>egl-headless</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>dbus</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </graphics>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <video supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='modelType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vga</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>cirrus</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>none</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>bochs</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>ramfb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </video>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <hostdev supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='mode'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>subsystem</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='startupPolicy'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>default</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>mandatory</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>requisite</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>optional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='subsysType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>usb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pci</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>scsi</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='capsType'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='pciBackend'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </hostdev>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <rng supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio-transitional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio-non-transitional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendModel'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>random</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>egd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>builtin</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </rng>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <filesystem supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='driverType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>path</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>handle</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtiofs</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </filesystem>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <tpm supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tpm-tis</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tpm-crb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendModel'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>emulator</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>external</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendVersion'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>2.0</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </tpm>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <redirdev supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='bus'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>usb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </redirdev>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <channel supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pty</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>unix</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </channel>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <crypto supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>qemu</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendModel'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>builtin</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </crypto>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <interface supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>default</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>passt</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </interface>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <panic supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>isa</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>hyperv</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </panic>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <console supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>null</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vc</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pty</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>dev</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>file</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pipe</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>stdio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>udp</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tcp</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>unix</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>qemu-vdagent</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>dbus</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </console>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </devices>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <features>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <gic supported='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <vmcoreinfo supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <genid supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <backingStoreInput supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <backup supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <async-teardown supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <ps2 supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <sev supported='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <sgx supported='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <hyperv supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='features'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>relaxed</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vapic</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>spinlocks</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vpindex</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>runtime</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>synic</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>stimer</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>reset</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vendor_id</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>frequencies</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>reenlightenment</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tlbflush</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>ipi</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>avic</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>emsr_bitmap</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>xmm_input</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <defaults>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <spinlocks>4095</spinlocks>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <stimer_direct>on</stimer_direct>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </defaults>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </hyperv>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <launchSecurity supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='sectype'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tdx</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </launchSecurity>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </features>
Dec 09 16:21:47 compute-0 nova_compute[242427]: </domainCapabilities>
Dec 09 16:21:47 compute-0 nova_compute[242427]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 09 16:21:47 compute-0 nova_compute[242427]: 2025-12-09 16:21:47.801 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 09 16:21:47 compute-0 nova_compute[242427]: <domainCapabilities>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <domain>kvm</domain>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <arch>i686</arch>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <vcpu max='4096'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <iothreads supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <os supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <enum name='firmware'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <loader supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>rom</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pflash</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='readonly'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>yes</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>no</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='secure'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>no</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </loader>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </os>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <cpu>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='host-passthrough' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='hostPassthroughMigratable'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>on</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>off</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='maximum' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='maximumMigratable'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>on</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>off</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='host-model' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <vendor>AMD</vendor>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='x2apic'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='hypervisor'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='stibp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='overflow-recov'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='succor'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='ibrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='lbrv'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='tsc-scale'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='flushbyasid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='pause-filter'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='pfthreshold'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='disable' name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='custom' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cooperlake'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cooperlake-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cooperlake-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Denverton'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Denverton-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Denverton-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Denverton-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Dhyana-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Genoa'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amd-psfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='auto-ibrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='stibp-always-on'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amd-psfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='auto-ibrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='stibp-always-on'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Milan'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Milan-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Milan-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amd-psfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='stibp-always-on'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='GraniteRapids'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='prefetchiti'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='GraniteRapids-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='prefetchiti'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='GraniteRapids-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx10'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx10-128'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx10-256'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx10-512'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='prefetchiti'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 sudo[243282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utrvomqblgguekxvheiawoyvgxkhyyko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297307.5849538-1557-265993222803801/AnsiballZ_systemd.py'
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v5'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 sudo[243282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v6'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v7'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='IvyBridge'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='IvyBridge-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='IvyBridge-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='IvyBridge-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='KnightsMill'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512er'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512pf'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='KnightsMill-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512er'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512pf'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Opteron_G4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Opteron_G4-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Opteron_G5'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tbm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Opteron_G5-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tbm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SierraForest'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cmpccxadd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SierraForest-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cmpccxadd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v5'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='athlon'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='athlon-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='core2duo'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='core2duo-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='coreduo'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='coreduo-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='n270'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='n270-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='phenom'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='phenom-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </cpu>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <memoryBacking supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <enum name='sourceType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>file</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>anonymous</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>memfd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </memoryBacking>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <devices>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <disk supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='diskDevice'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>disk</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>cdrom</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>floppy</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>lun</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='bus'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>fdc</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>scsi</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>usb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>sata</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio-transitional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio-non-transitional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </disk>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <graphics supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vnc</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>egl-headless</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>dbus</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </graphics>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <video supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='modelType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vga</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>cirrus</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>none</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>bochs</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>ramfb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </video>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <hostdev supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='mode'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>subsystem</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='startupPolicy'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>default</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>mandatory</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>requisite</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>optional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='subsysType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>usb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pci</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>scsi</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='capsType'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='pciBackend'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </hostdev>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <rng supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio-transitional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio-non-transitional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendModel'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>random</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>egd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>builtin</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </rng>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <filesystem supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='driverType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>path</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>handle</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtiofs</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </filesystem>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <tpm supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tpm-tis</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tpm-crb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendModel'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>emulator</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>external</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendVersion'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>2.0</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </tpm>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <redirdev supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='bus'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>usb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </redirdev>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <channel supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pty</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>unix</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </channel>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <crypto supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>qemu</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendModel'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>builtin</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </crypto>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <interface supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>default</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>passt</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </interface>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <panic supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>isa</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>hyperv</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </panic>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <console supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>null</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vc</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pty</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>dev</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>file</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pipe</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>stdio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>udp</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tcp</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>unix</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>qemu-vdagent</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>dbus</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </console>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </devices>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <features>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <gic supported='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <vmcoreinfo supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <genid supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <backingStoreInput supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <backup supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <async-teardown supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <ps2 supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <sev supported='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <sgx supported='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <hyperv supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='features'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>relaxed</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vapic</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>spinlocks</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vpindex</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>runtime</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>synic</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>stimer</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>reset</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vendor_id</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>frequencies</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>reenlightenment</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tlbflush</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>ipi</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>avic</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>emsr_bitmap</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>xmm_input</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <defaults>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <spinlocks>4095</spinlocks>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <stimer_direct>on</stimer_direct>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </defaults>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </hyperv>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <launchSecurity supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='sectype'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tdx</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </launchSecurity>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </features>
Dec 09 16:21:47 compute-0 nova_compute[242427]: </domainCapabilities>
Dec 09 16:21:47 compute-0 nova_compute[242427]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 09 16:21:47 compute-0 nova_compute[242427]: 2025-12-09 16:21:47.833 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 09 16:21:47 compute-0 nova_compute[242427]: 2025-12-09 16:21:47.837 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 09 16:21:47 compute-0 nova_compute[242427]: <domainCapabilities>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <domain>kvm</domain>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <arch>x86_64</arch>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <vcpu max='240'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <iothreads supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <os supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <enum name='firmware'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <loader supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>rom</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pflash</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='readonly'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>yes</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>no</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='secure'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>no</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </loader>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </os>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <cpu>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='host-passthrough' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='hostPassthroughMigratable'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>on</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>off</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='maximum' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='maximumMigratable'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>on</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>off</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='host-model' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <vendor>AMD</vendor>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='x2apic'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='hypervisor'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='stibp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='overflow-recov'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='succor'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='ibrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='lbrv'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='tsc-scale'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='flushbyasid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='pause-filter'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='pfthreshold'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='disable' name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='custom' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cooperlake'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cooperlake-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cooperlake-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Denverton'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Denverton-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Denverton-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Denverton-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Dhyana-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Genoa'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amd-psfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='auto-ibrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='stibp-always-on'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amd-psfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='auto-ibrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='stibp-always-on'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Milan'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Milan-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Milan-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amd-psfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='stibp-always-on'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='EPYC-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='GraniteRapids'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='prefetchiti'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='GraniteRapids-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='prefetchiti'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='GraniteRapids-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx10'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx10-128'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx10-256'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx10-512'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='prefetchiti'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Haswell-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v5'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v6'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v7'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='IvyBridge'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='IvyBridge-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='IvyBridge-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='IvyBridge-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='KnightsMill'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512er'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512pf'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='KnightsMill-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512er'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512pf'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Opteron_G4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Opteron_G4-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Opteron_G5'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tbm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Opteron_G5-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tbm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SierraForest'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cmpccxadd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='SierraForest-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-ifma'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cmpccxadd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v5'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='athlon'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='athlon-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='core2duo'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='core2duo-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='coreduo'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='coreduo-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='n270'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='n270-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='phenom'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='phenom-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </cpu>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <memoryBacking supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <enum name='sourceType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>file</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>anonymous</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>memfd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </memoryBacking>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <devices>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <disk supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='diskDevice'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>disk</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>cdrom</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>floppy</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>lun</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='bus'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>ide</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>fdc</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>scsi</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>usb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>sata</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio-transitional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio-non-transitional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </disk>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <graphics supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vnc</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>egl-headless</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>dbus</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </graphics>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <video supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='modelType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vga</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>cirrus</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>none</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>bochs</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>ramfb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </video>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <hostdev supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='mode'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>subsystem</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='startupPolicy'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>default</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>mandatory</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>requisite</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>optional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='subsysType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>usb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pci</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>scsi</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='capsType'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='pciBackend'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </hostdev>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <rng supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio-transitional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtio-non-transitional</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendModel'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>random</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>egd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>builtin</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </rng>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <filesystem supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='driverType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>path</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>handle</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>virtiofs</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </filesystem>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <tpm supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tpm-tis</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tpm-crb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendModel'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>emulator</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>external</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendVersion'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>2.0</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </tpm>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <redirdev supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='bus'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>usb</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </redirdev>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <channel supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pty</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>unix</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </channel>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <crypto supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>qemu</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendModel'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>builtin</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </crypto>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <interface supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='backendType'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>default</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>passt</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </interface>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <panic supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>isa</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>hyperv</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </panic>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <console supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>null</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vc</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pty</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>dev</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>file</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pipe</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>stdio</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>udp</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tcp</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>unix</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>qemu-vdagent</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>dbus</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </console>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </devices>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <features>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <gic supported='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <vmcoreinfo supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <genid supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <backingStoreInput supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <backup supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <async-teardown supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <ps2 supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <sev supported='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <sgx supported='no'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <hyperv supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='features'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>relaxed</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vapic</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>spinlocks</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vpindex</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>runtime</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>synic</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>stimer</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>reset</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>vendor_id</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>frequencies</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>reenlightenment</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tlbflush</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>ipi</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>avic</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>emsr_bitmap</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>xmm_input</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <defaults>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <spinlocks>4095</spinlocks>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <stimer_direct>on</stimer_direct>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </defaults>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </hyperv>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <launchSecurity supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='sectype'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>tdx</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </launchSecurity>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </features>
Dec 09 16:21:47 compute-0 nova_compute[242427]: </domainCapabilities>
Dec 09 16:21:47 compute-0 nova_compute[242427]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 09 16:21:47 compute-0 nova_compute[242427]: 2025-12-09 16:21:47.905 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 09 16:21:47 compute-0 nova_compute[242427]: <domainCapabilities>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <domain>kvm</domain>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <arch>x86_64</arch>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <vcpu max='4096'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <iothreads supported='yes'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <os supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <enum name='firmware'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>efi</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <loader supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>rom</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>pflash</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='readonly'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>yes</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>no</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='secure'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>yes</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>no</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </loader>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   </os>
Dec 09 16:21:47 compute-0 nova_compute[242427]:   <cpu>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='host-passthrough' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='hostPassthroughMigratable'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>on</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>off</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='maximum' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <enum name='maximumMigratable'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>on</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <value>off</value>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='host-model' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <vendor>AMD</vendor>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='x2apic'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='hypervisor'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='stibp'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='overflow-recov'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='succor'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='ibrs'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='lbrv'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='tsc-scale'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='flushbyasid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='pause-filter'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='pfthreshold'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <feature policy='disable' name='xsaves'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:47 compute-0 nova_compute[242427]:     <mode name='custom' supported='yes'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v1'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v2'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v3'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Broadwell-v4'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 16:21:47 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:47 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Cooperlake'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Cooperlake-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Cooperlake-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Denverton'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Denverton-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Denverton-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Denverton-v3'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Dhyana-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='EPYC-Genoa'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amd-psfd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='auto-ibrs'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='stibp-always-on'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amd-psfd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='auto-ibrs'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='stibp-always-on'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='EPYC-Milan'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='EPYC-Milan-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='EPYC-Milan-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amd-psfd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='stibp-always-on'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='EPYC-Rome-v3'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='EPYC-v3'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='EPYC-v4'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='GraniteRapids'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-fp16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='prefetchiti'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='GraniteRapids-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-fp16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='prefetchiti'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='GraniteRapids-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-fp16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx10'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx10-128'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx10-256'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx10-512'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='prefetchiti'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Haswell'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Haswell-IBRS'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Haswell-noTSX'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Haswell-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Haswell-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Haswell-v3'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Haswell-v4'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v3'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v4'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v5'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v6'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Icelake-Server-v7'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='IvyBridge'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='IvyBridge-IBRS'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='IvyBridge-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='IvyBridge-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='KnightsMill'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512er'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512pf'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='KnightsMill-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512er'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512pf'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Opteron_G4'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Opteron_G4-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Opteron_G5'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='tbm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Opteron_G5-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fma4'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='tbm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xop'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='SapphireRapids-v3'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-int8'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='amx-tile'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-bf16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-fp16'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bitalg'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrc'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fzrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='la57'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='taa-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xfd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='SierraForest'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='cmpccxadd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='SierraForest-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-ifma'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-vnni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='cmpccxadd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fbsdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='fsrs'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ibrs-all'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='mcdt-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pbrsb-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='psdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='serialize'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vaes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v3'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Client-v4'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='hle'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='rtm'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v3'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v4'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Skylake-Server-v5'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512bw'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512cd'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512dq'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512f'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='avx512vl'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='invpcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pcid'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='pku'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Snowridge'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='mpx'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v2'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v3'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='core-capability'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='split-lock-detect'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='Snowridge-v4'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='cldemote'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='erms'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='gfni'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdir64b'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='movdiri'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='xsaves'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='athlon'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='athlon-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='core2duo'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='core2duo-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='coreduo'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='coreduo-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='n270'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='n270-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='ss'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='phenom'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <blockers model='phenom-v1'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='3dnow'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <feature name='3dnowext'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </blockers>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </mode>
Dec 09 16:21:48 compute-0 nova_compute[242427]:   </cpu>
Dec 09 16:21:48 compute-0 nova_compute[242427]:   <memoryBacking supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <enum name='sourceType'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <value>file</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <value>anonymous</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <value>memfd</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:   </memoryBacking>
Dec 09 16:21:48 compute-0 nova_compute[242427]:   <devices>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <disk supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='diskDevice'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>disk</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>cdrom</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>floppy</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>lun</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='bus'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>fdc</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>scsi</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>usb</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>sata</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>virtio-transitional</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>virtio-non-transitional</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </disk>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <graphics supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>vnc</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>egl-headless</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>dbus</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </graphics>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <video supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='modelType'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>vga</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>cirrus</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>none</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>bochs</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>ramfb</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </video>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <hostdev supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='mode'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>subsystem</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='startupPolicy'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>default</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>mandatory</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>requisite</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>optional</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='subsysType'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>usb</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>pci</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>scsi</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='capsType'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='pciBackend'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </hostdev>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <rng supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>virtio</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>virtio-transitional</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>virtio-non-transitional</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='backendModel'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>random</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>egd</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>builtin</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </rng>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <filesystem supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='driverType'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>path</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>handle</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>virtiofs</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </filesystem>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <tpm supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>tpm-tis</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>tpm-crb</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='backendModel'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>emulator</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>external</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='backendVersion'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>2.0</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </tpm>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <redirdev supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='bus'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>usb</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </redirdev>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <channel supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>pty</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>unix</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </channel>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <crypto supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='model'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>qemu</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='backendModel'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>builtin</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </crypto>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <interface supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='backendType'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>default</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>passt</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </interface>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <panic supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='model'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>isa</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>hyperv</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </panic>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <console supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='type'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>null</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>vc</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>pty</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>dev</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>file</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>pipe</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>stdio</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>udp</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>tcp</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>unix</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>qemu-vdagent</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>dbus</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </console>
Dec 09 16:21:48 compute-0 nova_compute[242427]:   </devices>
Dec 09 16:21:48 compute-0 nova_compute[242427]:   <features>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <gic supported='no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <vmcoreinfo supported='yes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <genid supported='yes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <backingStoreInput supported='yes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <backup supported='yes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <async-teardown supported='yes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <ps2 supported='yes'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <sev supported='no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <sgx supported='no'/>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <hyperv supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='features'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>relaxed</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>vapic</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>spinlocks</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>vpindex</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>runtime</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>synic</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>stimer</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>reset</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>vendor_id</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>frequencies</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>reenlightenment</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>tlbflush</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>ipi</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>avic</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>emsr_bitmap</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>xmm_input</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <defaults>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <spinlocks>4095</spinlocks>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <stimer_direct>on</stimer_direct>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </defaults>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </hyperv>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     <launchSecurity supported='yes'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       <enum name='sectype'>
Dec 09 16:21:48 compute-0 nova_compute[242427]:         <value>tdx</value>
Dec 09 16:21:48 compute-0 nova_compute[242427]:       </enum>
Dec 09 16:21:48 compute-0 nova_compute[242427]:     </launchSecurity>
Dec 09 16:21:48 compute-0 nova_compute[242427]:   </features>
Dec 09 16:21:48 compute-0 nova_compute[242427]: </domainCapabilities>
Dec 09 16:21:48 compute-0 nova_compute[242427]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
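The XML dump that ends above is libvirt's domainCapabilities document, which nova-compute fetches once at startup to learn which device models and features QEMU/KVM on this host can expose. A minimal sketch of retrieving the same document with libvirt-python follows; the connection URI and the enum traversal are illustrative assumptions, not nova's exact code path.

    import xml.etree.ElementTree as ET

    import libvirt  # assumption: libvirt-python installed, virtqemud reachable

    conn = libvirt.open('qemu:///system')
    # virConnectGetDomainCapabilities(emulatorbin, arch, machine, virttype, flags)
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    root = ET.fromstring(caps_xml)

    # Walk the <devices> enums, e.g. the <rng> and <tpm> blocks seen above.
    for device in root.find('devices'):
        for enum in device.findall('enum'):
            values = [v.text for v in enum.findall('value')]
            print(device.tag, enum.get('name'), values)
    conn.close()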
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:47.985 242431 DEBUG nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:47.985 242431 INFO nova.virt.libvirt.host [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Secure Boot support detected
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:47.987 242431 INFO nova.virt.libvirt.driver [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:47.997 242431 DEBUG nova.virt.libvirt.driver [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:48.031 242431 INFO nova.virt.node [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Determined node identity ca130087-db63-46e1-b278-a80bb66e6865 from /var/lib/nova/compute_id
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:48.049 242431 WARNING nova.compute.manager [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Compute nodes ['ca130087-db63-46e1-b278-a80bb66e6865'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:48.098 242431 INFO nova.compute.manager [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:48.143 242431 WARNING nova.compute.manager [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:48.144 242431 DEBUG oslo_concurrency.lockutils [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:48.144 242431 DEBUG oslo_concurrency.lockutils [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:48.144 242431 DEBUG oslo_concurrency.lockutils [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
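The three lockutils lines above are the standard oslo.concurrency pattern: a named in-process lock wrapped around a resource-tracker method. A minimal sketch of the decorator that produces exactly this acquire/release trace; the method name comes from the log, the body is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Held under the same named lock as in the DEBUG lines above; the
        # real method prunes cached ComputeNode records. Placeholder here.
        pass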
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:48.144 242431 DEBUG nova.compute.resource_tracker [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:48.145 242431 DEBUG oslo_concurrency.processutils [None req-216f1dcf-047e-450e-8cfa-765a8fee08b0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
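Auditing resources on a Ceph-backed host shells out to the ceph CLI, as the processutils line above shows. A hedged sketch of running the same command and reading the cluster totals; the command and its --id/--conf arguments are verbatim from the log, while the JSON keys reflect the usual `ceph df` schema and may differ across Ceph releases:

    import json
    import subprocess

    result = subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True)
    df = json.loads(result.stdout)
    stats = df['stats']  # assumption: total_bytes/total_avail_bytes keys
    print('%.1f GiB free of %.1f GiB' % (
        stats['total_avail_bytes'] / 2**30, stats['total_bytes'] / 2**30))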
Dec 09 16:21:48 compute-0 python3.9[243284]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 16:21:48 compute-0 systemd[1]: Stopping nova_compute container...
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:48.294 242431 DEBUG oslo_concurrency.lockutils [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:48.295 242431 DEBUG oslo_concurrency.lockutils [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 16:21:48 compute-0 nova_compute[242427]: 2025-12-09 16:21:48.295 242431 DEBUG oslo_concurrency.lockutils [None req-b8821bcd-f0eb-41d1-bbd9-5e291f63a85a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 16:21:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:48 compute-0 virtqemud[243015]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 09 16:21:48 compute-0 virtqemud[243015]: hostname: compute-0
Dec 09 16:21:48 compute-0 virtqemud[243015]: End of file while reading data: Input/output error
Dec 09 16:21:48 compute-0 systemd[1]: libpod-9f2fa752ba80f2edb6a6ed5e7e6142147c4b695355174f47652b45554431a9ce.scope: Deactivated successfully.
Dec 09 16:21:48 compute-0 systemd[1]: libpod-9f2fa752ba80f2edb6a6ed5e7e6142147c4b695355174f47652b45554431a9ce.scope: Consumed 3.189s CPU time.
Dec 09 16:21:48 compute-0 podman[243289]: 2025-12-09 16:21:48.703793719 +0000 UTC m=+0.450434127 container died 9f2fa752ba80f2edb6a6ed5e7e6142147c4b695355174f47652b45554431a9ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute)
Dec 09 16:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f2fa752ba80f2edb6a6ed5e7e6142147c4b695355174f47652b45554431a9ce-userdata-shm.mount: Deactivated successfully.
Dec 09 16:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0425d596bb7daad543f9d5011c58c09bcb5d964e2cdbbdbfcaf1ccef172b7cf6-merged.mount: Deactivated successfully.
Dec 09 16:21:48 compute-0 podman[243323]: 2025-12-09 16:21:48.850431945 +0000 UTC m=+0.129782278 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 09 16:21:49 compute-0 sudo[243361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:21:49 compute-0 sudo[243361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:21:49 compute-0 sudo[243361]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:49 compute-0 sudo[243386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:21:49 compute-0 sudo[243386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:21:49 compute-0 podman[243289]: 2025-12-09 16:21:49.587619767 +0000 UTC m=+1.334260175 container cleanup 9f2fa752ba80f2edb6a6ed5e7e6142147c4b695355174f47652b45554431a9ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 09 16:21:49 compute-0 podman[243289]: nova_compute
Dec 09 16:21:49 compute-0 podman[243425]: nova_compute
Dec 09 16:21:49 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 09 16:21:49 compute-0 systemd[1]: Stopped nova_compute container.
Dec 09 16:21:49 compute-0 systemd[1]: Starting nova_compute container...
Dec 09 16:21:49 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425d596bb7daad543f9d5011c58c09bcb5d964e2cdbbdbfcaf1ccef172b7cf6/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425d596bb7daad543f9d5011c58c09bcb5d964e2cdbbdbfcaf1ccef172b7cf6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425d596bb7daad543f9d5011c58c09bcb5d964e2cdbbdbfcaf1ccef172b7cf6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425d596bb7daad543f9d5011c58c09bcb5d964e2cdbbdbfcaf1ccef172b7cf6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425d596bb7daad543f9d5011c58c09bcb5d964e2cdbbdbfcaf1ccef172b7cf6/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:49 compute-0 podman[243436]: 2025-12-09 16:21:49.759347556 +0000 UTC m=+0.094145396 container init 9f2fa752ba80f2edb6a6ed5e7e6142147c4b695355174f47652b45554431a9ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 09 16:21:49 compute-0 podman[243436]: 2025-12-09 16:21:49.769549836 +0000 UTC m=+0.104347666 container start 9f2fa752ba80f2edb6a6ed5e7e6142147c4b695355174f47652b45554431a9ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 09 16:21:49 compute-0 nova_compute[243452]: + sudo -E kolla_set_configs
Dec 09 16:21:49 compute-0 podman[243436]: nova_compute
Dec 09 16:21:49 compute-0 systemd[1]: Started nova_compute container.
Dec 09 16:21:49 compute-0 sudo[243282]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:49 compute-0 ceph-mon[75222]: pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:49 compute-0 sudo[243386]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Validating config file
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Copying service configuration files
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Deleting /etc/ceph
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Creating directory /etc/ceph
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /etc/ceph
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Writing out command to execute
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 09 16:21:49 compute-0 nova_compute[243452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
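The INFO:__main__ block above is kolla_set_configs executing the COPY_ALWAYS strategy: on every container start, each entry in /var/lib/kolla/config_files/config.json is deleted at its destination, re-copied from the source, and re-permissioned. A simplified sketch of that loop, assuming the documented kolla config.json fields (source, dest, perm) and ignoring the directory handling, ownership, and globbing the real tool also performs:

    import json
    import os
    import shutil

    with open('/var/lib/kolla/config_files/config.json') as f:
        cfg = json.load(f)

    for entry in cfg.get('config_files', []):
        src, dest = entry['source'], entry['dest']
        if os.path.isfile(dest):
            os.remove(dest)                    # 'Deleting ...' lines above
        shutil.copy(src, dest)                 # 'Copying ... to ...' lines
        os.chmod(dest, int(entry.get('perm', '0600'), 8))  # 'Setting permission'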
Dec 09 16:21:49 compute-0 nova_compute[243452]: ++ cat /run_command
Dec 09 16:21:49 compute-0 nova_compute[243452]: + CMD=nova-compute
Dec 09 16:21:49 compute-0 nova_compute[243452]: + ARGS=
Dec 09 16:21:49 compute-0 nova_compute[243452]: + sudo kolla_copy_cacerts
Dec 09 16:21:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:21:49 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:21:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:21:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:21:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:21:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:21:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:21:49 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:21:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:21:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:21:49 compute-0 nova_compute[243452]: + [[ ! -n '' ]]
Dec 09 16:21:49 compute-0 nova_compute[243452]: + . kolla_extend_start
Dec 09 16:21:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:21:49 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:21:49 compute-0 nova_compute[243452]: + echo 'Running command: '\''nova-compute'\'''
Dec 09 16:21:49 compute-0 nova_compute[243452]: Running command: 'nova-compute'
Dec 09 16:21:49 compute-0 nova_compute[243452]: + umask 0022
Dec 09 16:21:49 compute-0 nova_compute[243452]: + exec nova-compute
Dec 09 16:21:49 compute-0 sudo[243505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:21:49 compute-0 sudo[243505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:21:49 compute-0 sudo[243505]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:50 compute-0 sudo[243530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:21:50 compute-0 sudo[243530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:21:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:50 compute-0 podman[243666]: 2025-12-09 16:21:50.354868284 +0000 UTC m=+0.061945751 container create 39e53218b94fd69401b7f39dfbec7ac20e52c48cbde6f30656a02d7191e32650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 09 16:21:50 compute-0 sudo[243706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyorcdjtefvaarnkwtouzqadousbvora ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765297310.065596-1566-66357895180321/AnsiballZ_podman_container.py'
Dec 09 16:21:50 compute-0 sudo[243706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:21:50 compute-0 systemd[1]: Started libpod-conmon-39e53218b94fd69401b7f39dfbec7ac20e52c48cbde6f30656a02d7191e32650.scope.
Dec 09 16:21:50 compute-0 podman[243666]: 2025-12-09 16:21:50.324524692 +0000 UTC m=+0.031602249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:21:50 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:21:50 compute-0 podman[243666]: 2025-12-09 16:21:50.453664841 +0000 UTC m=+0.160742318 container init 39e53218b94fd69401b7f39dfbec7ac20e52c48cbde6f30656a02d7191e32650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:21:50 compute-0 podman[243666]: 2025-12-09 16:21:50.462632526 +0000 UTC m=+0.169709983 container start 39e53218b94fd69401b7f39dfbec7ac20e52c48cbde6f30656a02d7191e32650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 09 16:21:50 compute-0 podman[243666]: 2025-12-09 16:21:50.46666638 +0000 UTC m=+0.173743857 container attach 39e53218b94fd69401b7f39dfbec7ac20e52c48cbde6f30656a02d7191e32650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_satoshi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 16:21:50 compute-0 interesting_satoshi[243711]: 167 167
Dec 09 16:21:50 compute-0 systemd[1]: libpod-39e53218b94fd69401b7f39dfbec7ac20e52c48cbde6f30656a02d7191e32650.scope: Deactivated successfully.
Dec 09 16:21:50 compute-0 podman[243666]: 2025-12-09 16:21:50.468492512 +0000 UTC m=+0.175569969 container died 39e53218b94fd69401b7f39dfbec7ac20e52c48cbde6f30656a02d7191e32650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_satoshi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:21:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cce274bc856adfe811fd800a7a14b15db3b666ad6b1e6b949efe3ee422000e6d-merged.mount: Deactivated successfully.
Dec 09 16:21:50 compute-0 podman[243666]: 2025-12-09 16:21:50.502255741 +0000 UTC m=+0.209333198 container remove 39e53218b94fd69401b7f39dfbec7ac20e52c48cbde6f30656a02d7191e32650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:21:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:21:50 compute-0 systemd[1]: libpod-conmon-39e53218b94fd69401b7f39dfbec7ac20e52c48cbde6f30656a02d7191e32650.scope: Deactivated successfully.
Dec 09 16:21:50 compute-0 python3.9[243708]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 09 16:21:50 compute-0 podman[243737]: 2025-12-09 16:21:50.710546608 +0000 UTC m=+0.061391645 container create 22605286acff25ef68168a9c48e8d6dea9e902359dc4f328f24d720e1caa70fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_spence, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:21:50 compute-0 systemd[1]: Started libpod-conmon-22605286acff25ef68168a9c48e8d6dea9e902359dc4f328f24d720e1caa70fd.scope.
Dec 09 16:21:50 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592d0493c9eab04614963c5844097a3b6b607a507778b13c225f5c0d423a3ade/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592d0493c9eab04614963c5844097a3b6b607a507778b13c225f5c0d423a3ade/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592d0493c9eab04614963c5844097a3b6b607a507778b13c225f5c0d423a3ade/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592d0493c9eab04614963c5844097a3b6b607a507778b13c225f5c0d423a3ade/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592d0493c9eab04614963c5844097a3b6b607a507778b13c225f5c0d423a3ade/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:50 compute-0 podman[243737]: 2025-12-09 16:21:50.690861539 +0000 UTC m=+0.041706556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:21:50 compute-0 podman[243737]: 2025-12-09 16:21:50.794860454 +0000 UTC m=+0.145705501 container init 22605286acff25ef68168a9c48e8d6dea9e902359dc4f328f24d720e1caa70fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:21:50 compute-0 podman[243737]: 2025-12-09 16:21:50.80423648 +0000 UTC m=+0.155081487 container start 22605286acff25ef68168a9c48e8d6dea9e902359dc4f328f24d720e1caa70fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:21:50 compute-0 podman[243737]: 2025-12-09 16:21:50.816615292 +0000 UTC m=+0.167460289 container attach 22605286acff25ef68168a9c48e8d6dea9e902359dc4f328f24d720e1caa70fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:21:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:21:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:21:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:21:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:21:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:21:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:21:50 compute-0 systemd[1]: Started libpod-conmon-bc119f4700c999109cd900087980eaf196570400e85a788d7b2a75fea4b50163.scope.
Dec 09 16:21:50 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70eaa46d1f9556a8749b6274bdb290fb032501f14f8f58629a0b5737d8480d8/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70eaa46d1f9556a8749b6274bdb290fb032501f14f8f58629a0b5737d8480d8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70eaa46d1f9556a8749b6274bdb290fb032501f14f8f58629a0b5737d8480d8/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:50 compute-0 podman[243768]: 2025-12-09 16:21:50.909076818 +0000 UTC m=+0.152028079 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 09 16:21:50 compute-0 podman[243775]: 2025-12-09 16:21:50.918667501 +0000 UTC m=+0.149706024 container init bc119f4700c999109cd900087980eaf196570400e85a788d7b2a75fea4b50163 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 09 16:21:50 compute-0 podman[243775]: 2025-12-09 16:21:50.927840792 +0000 UTC m=+0.158879325 container start bc119f4700c999109cd900087980eaf196570400e85a788d7b2a75fea4b50163 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:21:50 compute-0 python3.9[243708]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Applying nova statedir ownership
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 09 16:21:50 compute-0 nova_compute_init[243819]: INFO:nova_statedir:Nova statedir ownership complete
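[editor's note] The nova_compute_init run above traces a single ownership-fixing pass over /var/lib/nova: stat each path, chown anything not already 42436:42436, relabel directories with the container SELinux context, and skip the path named in NOVA_STATEDIR_OWNERSHIP_SKIP. A minimal sketch of that pass follows; the real /sbin/nova_statedir_ownership.py is bind-mounted from /var/lib/openstack/config/nova and differs in detail, and the names TARGET_UID, fix_tree and the chcon call are assumptions, not taken from the log.

    #!/usr/bin/env python3
    # Hypothetical sketch of the pass logged by nova_compute_init above.
    import logging
    import os
    import subprocess

    logging.basicConfig(level=logging.INFO)  # default format gives INFO:nova_statedir:... lines
    LOG = logging.getLogger("nova_statedir")

    TARGET_UID = TARGET_GID = 42436          # target from "Target ownership for /var/lib/nova"
    SKIP = {os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "")}
    CONTEXT = "system_u:object_r:container_file_t:s0"

    def fix_tree(root="/var/lib/nova"):
        LOG.info("Applying nova statedir ownership")
        LOG.info("Target ownership for %s: %d:%d", root, TARGET_UID, TARGET_GID)
        for dirpath, _dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.stat(path)
                LOG.info("Checking uid: %d gid: %d path: %s", st.st_uid, st.st_gid, path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    os.chown(path, TARGET_UID, TARGET_GID)
                if os.path.isdir(path):
                    # relabel directories, as in the "Setting selinux context" lines
                    subprocess.run(["chcon", CONTEXT, path], check=False)
        LOG.info("Nova statedir ownership complete")

    if __name__ == "__main__":
        fix_tree()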
Dec 09 16:21:51 compute-0 systemd[1]: libpod-bc119f4700c999109cd900087980eaf196570400e85a788d7b2a75fea4b50163.scope: Deactivated successfully.
Dec 09 16:21:51 compute-0 podman[243836]: 2025-12-09 16:21:51.04536468 +0000 UTC m=+0.023998052 container died bc119f4700c999109cd900087980eaf196570400e85a788d7b2a75fea4b50163 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Dec 09 16:21:51 compute-0 podman[243836]: 2025-12-09 16:21:51.097858012 +0000 UTC m=+0.076491394 container cleanup bc119f4700c999109cd900087980eaf196570400e85a788d7b2a75fea4b50163 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:21:51 compute-0 sudo[243706]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:51 compute-0 systemd[1]: libpod-conmon-bc119f4700c999109cd900087980eaf196570400e85a788d7b2a75fea4b50163.scope: Deactivated successfully.
Dec 09 16:21:51 compute-0 strange_spence[243773]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:21:51 compute-0 strange_spence[243773]: --> All data devices are unavailable
Dec 09 16:21:51 compute-0 systemd[1]: libpod-22605286acff25ef68168a9c48e8d6dea9e902359dc4f328f24d720e1caa70fd.scope: Deactivated successfully.
Dec 09 16:21:51 compute-0 podman[243737]: 2025-12-09 16:21:51.356401825 +0000 UTC m=+0.707246822 container died 22605286acff25ef68168a9c48e8d6dea9e902359dc4f328f24d720e1caa70fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e70eaa46d1f9556a8749b6274bdb290fb032501f14f8f58629a0b5737d8480d8-merged.mount: Deactivated successfully.
Dec 09 16:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc119f4700c999109cd900087980eaf196570400e85a788d7b2a75fea4b50163-userdata-shm.mount: Deactivated successfully.
Dec 09 16:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-592d0493c9eab04614963c5844097a3b6b607a507778b13c225f5c0d423a3ade-merged.mount: Deactivated successfully.
Dec 09 16:21:51 compute-0 podman[243737]: 2025-12-09 16:21:51.41429444 +0000 UTC m=+0.765139437 container remove 22605286acff25ef68168a9c48e8d6dea9e902359dc4f328f24d720e1caa70fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Dec 09 16:21:51 compute-0 systemd[1]: libpod-conmon-22605286acff25ef68168a9c48e8d6dea9e902359dc4f328f24d720e1caa70fd.scope: Deactivated successfully.
Dec 09 16:21:51 compute-0 sudo[243530]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:51 compute-0 sudo[243911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:21:51 compute-0 sshd-session[213941]: Connection closed by 192.168.122.30 port 43022
Dec 09 16:21:51 compute-0 sudo[243911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:21:51 compute-0 sshd-session[213923]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:21:51 compute-0 sudo[243911]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:51 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec 09 16:21:51 compute-0 systemd[1]: session-50.scope: Consumed 2min 19.907s CPU time.
Dec 09 16:21:51 compute-0 systemd-logind[786]: Session 50 logged out. Waiting for processes to exit.
Dec 09 16:21:51 compute-0 systemd-logind[786]: Removed session 50.
Dec 09 16:21:51 compute-0 sudo[243936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:21:51 compute-0 sudo[243936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:21:51 compute-0 ceph-mon[75222]: pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:51 compute-0 nova_compute[243452]: 2025-12-09 16:21:51.857 243461 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 09 16:21:51 compute-0 nova_compute[243452]: 2025-12-09 16:21:51.858 243461 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 09 16:21:51 compute-0 nova_compute[243452]: 2025-12-09 16:21:51.858 243461 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 09 16:21:51 compute-0 nova_compute[243452]: 2025-12-09 16:21:51.858 243461 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 09 16:21:51 compute-0 podman[243974]: 2025-12-09 16:21:51.894192403 +0000 UTC m=+0.038697810 container create ee42bdb2c3916bb84e9f35bf77194158fe46acb8c42bbac2fcd27d69ab239ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_robinson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:21:51 compute-0 systemd[1]: Started libpod-conmon-ee42bdb2c3916bb84e9f35bf77194158fe46acb8c42bbac2fcd27d69ab239ef8.scope.
Dec 09 16:21:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:21:51 compute-0 podman[243974]: 2025-12-09 16:21:51.878525198 +0000 UTC m=+0.023030625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:21:51 compute-0 podman[243974]: 2025-12-09 16:21:51.988324188 +0000 UTC m=+0.132829655 container init ee42bdb2c3916bb84e9f35bf77194158fe46acb8c42bbac2fcd27d69ab239ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_robinson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:21:51 compute-0 podman[243974]: 2025-12-09 16:21:51.995310916 +0000 UTC m=+0.139816333 container start ee42bdb2c3916bb84e9f35bf77194158fe46acb8c42bbac2fcd27d69ab239ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_robinson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Dec 09 16:21:51 compute-0 eloquent_robinson[243991]: 167 167
Dec 09 16:21:51 compute-0 systemd[1]: libpod-ee42bdb2c3916bb84e9f35bf77194158fe46acb8c42bbac2fcd27d69ab239ef8.scope: Deactivated successfully.
Dec 09 16:21:52 compute-0 podman[243974]: 2025-12-09 16:21:52.000450092 +0000 UTC m=+0.144955539 container attach ee42bdb2c3916bb84e9f35bf77194158fe46acb8c42bbac2fcd27d69ab239ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_robinson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:21:52 compute-0 podman[243974]: 2025-12-09 16:21:52.000925906 +0000 UTC m=+0.145431333 container died ee42bdb2c3916bb84e9f35bf77194158fe46acb8c42bbac2fcd27d69ab239ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.018 243461 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:21:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-eeed89bc4ee77061f40a537e1c250b452f0fe53685e6ce79259d878bf6682dd8-merged.mount: Deactivated successfully.
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.038 243461 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.039 243461 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
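[editor's note] The failed grep above is not an error: nova's volume code (via os-brick) probes the installed iscsiadm binary for the literal string node.session.scan to decide whether manual iSCSI LUN scanning is supported, and exit status 1 simply means the string, and hence the feature, is absent. A minimal re-implementation of that probe; the grep invocation is taken from the log, while the helper name is an assumption.

    import subprocess

    def iscsiadm_supports_manual_scan(binary="/sbin/iscsiadm"):
        # Mirrors the logged probe: grep -F node.session.scan /sbin/iscsiadm.
        # Exit status 0 means the literal string is present in the binary,
        # 1 means absent; anything else is a genuine grep failure.
        proc = subprocess.run(
            ["grep", "-F", "node.session.scan", binary],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return proc.returncode == 0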
Dec 09 16:21:52 compute-0 podman[243974]: 2025-12-09 16:21:52.049572248 +0000 UTC m=+0.194077665 container remove ee42bdb2c3916bb84e9f35bf77194158fe46acb8c42bbac2fcd27d69ab239ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_robinson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:21:52 compute-0 systemd[1]: libpod-conmon-ee42bdb2c3916bb84e9f35bf77194158fe46acb8c42bbac2fcd27d69ab239ef8.scope: Deactivated successfully.
Dec 09 16:21:52 compute-0 podman[244015]: 2025-12-09 16:21:52.265918344 +0000 UTC m=+0.060750197 container create b22ac1591c9d30f1fe627eb22a8dc5e10bf46fd92110b19f9db1fb4cbee6175b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:21:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:52 compute-0 systemd[1]: Started libpod-conmon-b22ac1591c9d30f1fe627eb22a8dc5e10bf46fd92110b19f9db1fb4cbee6175b.scope.
Dec 09 16:21:52 compute-0 podman[244015]: 2025-12-09 16:21:52.243186608 +0000 UTC m=+0.038018481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:21:52 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f879d8d859b66d068e2bc3e7eb5313326366e2372e65d85a5a2367ae2ac87169/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f879d8d859b66d068e2bc3e7eb5313326366e2372e65d85a5a2367ae2ac87169/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f879d8d859b66d068e2bc3e7eb5313326366e2372e65d85a5a2367ae2ac87169/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f879d8d859b66d068e2bc3e7eb5313326366e2372e65d85a5a2367ae2ac87169/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:52 compute-0 podman[244015]: 2025-12-09 16:21:52.373983204 +0000 UTC m=+0.168815107 container init b22ac1591c9d30f1fe627eb22a8dc5e10bf46fd92110b19f9db1fb4cbee6175b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:21:52 compute-0 podman[244015]: 2025-12-09 16:21:52.386795818 +0000 UTC m=+0.181628031 container start b22ac1591c9d30f1fe627eb22a8dc5e10bf46fd92110b19f9db1fb4cbee6175b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:21:52 compute-0 podman[244015]: 2025-12-09 16:21:52.391281955 +0000 UTC m=+0.186113878 container attach b22ac1591c9d30f1fe627eb22a8dc5e10bf46fd92110b19f9db1fb4cbee6175b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.612 243461 INFO nova.virt.driver [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 09 16:21:52 compute-0 interesting_jones[244031]: {
Dec 09 16:21:52 compute-0 interesting_jones[244031]:     "0": [
Dec 09 16:21:52 compute-0 interesting_jones[244031]:         {
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "devices": [
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "/dev/loop3"
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             ],
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_name": "ceph_lv0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_size": "21470642176",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "name": "ceph_lv0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "tags": {
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.cluster_name": "ceph",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.crush_device_class": "",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.encrypted": "0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.objectstore": "bluestore",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.osd_id": "0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.type": "block",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.vdo": "0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.with_tpm": "0"
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             },
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "type": "block",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "vg_name": "ceph_vg0"
Dec 09 16:21:52 compute-0 interesting_jones[244031]:         }
Dec 09 16:21:52 compute-0 interesting_jones[244031]:     ],
Dec 09 16:21:52 compute-0 interesting_jones[244031]:     "1": [
Dec 09 16:21:52 compute-0 interesting_jones[244031]:         {
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "devices": [
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "/dev/loop4"
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             ],
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_name": "ceph_lv1",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_size": "21470642176",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "name": "ceph_lv1",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "tags": {
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.cluster_name": "ceph",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.crush_device_class": "",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.encrypted": "0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.objectstore": "bluestore",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.osd_id": "1",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.type": "block",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.vdo": "0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.with_tpm": "0"
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             },
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "type": "block",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "vg_name": "ceph_vg1"
Dec 09 16:21:52 compute-0 interesting_jones[244031]:         }
Dec 09 16:21:52 compute-0 interesting_jones[244031]:     ],
Dec 09 16:21:52 compute-0 interesting_jones[244031]:     "2": [
Dec 09 16:21:52 compute-0 interesting_jones[244031]:         {
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "devices": [
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "/dev/loop5"
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             ],
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_name": "ceph_lv2",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_size": "21470642176",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "name": "ceph_lv2",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "tags": {
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.cluster_name": "ceph",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.crush_device_class": "",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.encrypted": "0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.objectstore": "bluestore",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.osd_id": "2",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.type": "block",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.vdo": "0",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:                 "ceph.with_tpm": "0"
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             },
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "type": "block",
Dec 09 16:21:52 compute-0 interesting_jones[244031]:             "vg_name": "ceph_vg2"
Dec 09 16:21:52 compute-0 interesting_jones[244031]:         }
Dec 09 16:21:52 compute-0 interesting_jones[244031]:     ]
Dec 09 16:21:52 compute-0 interesting_jones[244031]: }
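[editor's note] The JSON emitted by interesting_jones is the result of the cephadm-wrapped `ceph-volume --fsid 67f67f44-... lvm list --format json` call logged at 16:21:51: a map from OSD id ("0", "1", "2") to the logical volumes backing it, with the CRUSH-relevant metadata carried as LV tags. A small sketch of reducing that output to an OSD-to-device summary; invoking ceph-volume directly on the host is an assumption here, since cephadm actually runs it inside a container.

    import json
    import subprocess

    # Hypothetical post-processing of `ceph-volume lvm list --format json`,
    # the same data shown in the interesting_jones output above.
    raw = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"])
    lvm = json.loads(raw)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(devices: {', '.join(lv['devices'])}, "
                  f"osd_fsid: {lv['tags']['ceph.osd_fsid']})")

    # For the data above this prints, e.g.:
    #   osd.0: /dev/ceph_vg0/ceph_lv0 (devices: /dev/loop3, osd_fsid: 5f4f01e5-...)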
Dec 09 16:21:52 compute-0 systemd[1]: libpod-b22ac1591c9d30f1fe627eb22a8dc5e10bf46fd92110b19f9db1fb4cbee6175b.scope: Deactivated successfully.
Dec 09 16:21:52 compute-0 podman[244015]: 2025-12-09 16:21:52.726424076 +0000 UTC m=+0.521255899 container died b22ac1591c9d30f1fe627eb22a8dc5e10bf46fd92110b19f9db1fb4cbee6175b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:21:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f879d8d859b66d068e2bc3e7eb5313326366e2372e65d85a5a2367ae2ac87169-merged.mount: Deactivated successfully.
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.756 243461 INFO nova.compute.provider_config [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 09 16:21:52 compute-0 podman[244015]: 2025-12-09 16:21:52.766991809 +0000 UTC m=+0.561823622 container remove b22ac1591c9d30f1fe627eb22a8dc5e10bf46fd92110b19f9db1fb4cbee6175b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_jones, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.773 243461 DEBUG oslo_concurrency.lockutils [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.773 243461 DEBUG oslo_concurrency.lockutils [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.773 243461 DEBUG oslo_concurrency.lockutils [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.773 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.774 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.774 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.774 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.774 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.774 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.774 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.775 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.775 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.775 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.775 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.775 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.776 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.776 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.776 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.776 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.776 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.776 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.777 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.777 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.777 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.777 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.777 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.778 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.778 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.778 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.778 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.778 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.778 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.779 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.779 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.779 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.779 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.779 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.779 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.779 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.780 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.780 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.780 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.780 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.780 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.780 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.780 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.781 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 systemd[1]: libpod-conmon-b22ac1591c9d30f1fe627eb22a8dc5e10bf46fd92110b19f9db1fb4cbee6175b.scope: Deactivated successfully.
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.781 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.781 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.781 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.781 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.781 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.781 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.782 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.782 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.782 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.782 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.782 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.782 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.783 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.783 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.783 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.783 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.783 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.783 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.783 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.784 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.784 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.784 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.784 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.784 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.784 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.784 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.784 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.785 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.785 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.785 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.785 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.785 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.785 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.786 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.786 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.786 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.786 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.786 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.786 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.787 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.787 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.787 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.787 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.787 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.787 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.787 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.788 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.788 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.788 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.788 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.788 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.788 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.788 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.789 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.789 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.789 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.789 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.789 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.789 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.789 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.790 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.790 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.790 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.790 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.790 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.790 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.790 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.791 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.791 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.791 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.791 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.791 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.791 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.791 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.792 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.792 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.792 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.792 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.792 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.792 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.792 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.793 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.793 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.793 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.793 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.793 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.793 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.793 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.794 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.794 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.794 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.794 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.794 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.794 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.794 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.795 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.795 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.795 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.796 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.796 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.796 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.796 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.796 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.796 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.797 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.797 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.797 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.797 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.797 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.797 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.797 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.798 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.798 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.798 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.798 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.798 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.798 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.799 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.799 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.799 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.799 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.799 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.799 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.800 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.800 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.800 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.800 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.800 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.800 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.800 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.801 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.801 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.801 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.801 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.801 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.801 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.801 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.802 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.802 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.802 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.802 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.802 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.802 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.802 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.803 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.803 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.803 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.803 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.803 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.803 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.803 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.804 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.804 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.804 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.804 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.804 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.804 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.805 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.805 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.805 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.805 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.805 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.805 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.805 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.806 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.806 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.806 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.806 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.806 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.807 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.807 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.807 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.807 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.807 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.807 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.807 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.808 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.808 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.808 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.808 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.808 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.808 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.809 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.809 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.809 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.809 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.809 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.809 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.809 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.810 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.810 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.810 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.810 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.810 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.810 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.810 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.811 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.811 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.811 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.811 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.811 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.811 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.811 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.812 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.812 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.812 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.812 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.812 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.812 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.812 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.813 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.813 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.813 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.813 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.813 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.813 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.813 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.814 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.814 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.814 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.814 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.814 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.814 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.815 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.815 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.815 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.815 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.815 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.815 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.815 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.816 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.816 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.816 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.816 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.816 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.816 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.816 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.817 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.817 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 sudo[243936]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.817 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.817 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.817 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.817 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.817 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.818 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.818 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.818 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.818 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.818 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.818 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.818 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.819 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.819 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.819 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.819 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.819 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.819 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.819 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.820 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.820 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.820 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.820 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.820 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.821 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.821 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.821 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.821 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.821 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.822 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.822 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.822 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.822 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.822 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.822 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.823 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.823 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.823 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.823 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.823 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.823 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.823 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.824 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.824 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.824 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.824 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.824 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.824 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.824 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.825 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.825 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.825 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.825 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.825 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.825 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.826 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.826 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.826 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.826 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.826 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.826 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.827 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.827 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.827 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.827 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.827 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.827 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.827 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.828 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.828 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.828 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.828 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.828 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.829 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.829 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.829 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.829 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.829 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.829 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.829 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.830 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.830 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.830 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.830 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.830 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.830 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.830 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.831 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.831 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.831 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.831 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.831 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.831 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.832 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.832 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.832 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.832 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.832 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.832 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.832 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.833 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.833 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.833 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.833 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.833 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.833 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.834 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.834 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.834 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.834 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.834 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.834 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.834 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.835 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.835 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.835 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.835 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.835 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.835 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.836 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.836 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.836 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.836 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.836 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.836 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.837 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.837 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.837 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.837 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.837 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.837 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.837 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.837 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.838 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.838 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.838 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.838 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.838 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.838 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.838 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.839 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.839 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.839 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.839 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.839 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.839 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.839 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.840 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.840 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.840 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.840 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.840 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.840 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.841 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.841 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.841 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.841 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.841 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.841 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.842 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.842 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.842 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.842 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.842 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.842 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.843 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.843 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.843 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.843 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.843 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.843 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.843 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.844 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.844 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.844 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.844 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.844 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.844 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.844 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.845 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.845 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.845 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.845 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.845 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.845 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.845 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.846 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.846 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.846 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.846 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.846 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.846 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.846 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.847 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.847 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.847 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.847 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.847 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.847 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.847 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.848 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.848 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.848 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.848 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.848 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.848 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.849 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.849 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.849 243461 WARNING oslo_config.cfg [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 09 16:21:52 compute-0 nova_compute[243452]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 09 16:21:52 compute-0 nova_compute[243452]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 09 16:21:52 compute-0 nova_compute[243452]: and ``live_migration_inbound_addr`` respectively.
Dec 09 16:21:52 compute-0 nova_compute[243452]: ).  Its value may be silently ignored in the future.
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.849 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
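[Editorial sketch, not part of the captured log] The warning above says the deprecated live_migration_uri should be replaced by live_migration_scheme and live_migration_inbound_addr. A minimal nova.conf fragment that would reproduce the same qemu+tls target URI is sketched below; the inbound address is deployment-specific and does not appear in this log, so it is shown only as a placeholder.

    [libvirt]
    # "tls" makes nova build qemu+tls://<dest>/system, matching the
    # deprecated live_migration_uri value logged above
    live_migration_scheme = tls
    # Address the destination host listens on for migrations; the value is
    # not present in this log (live_migration_inbound_addr = None), so it
    # is left as a placeholder here.
    # live_migration_inbound_addr = <dest-host-address>

Note that live_migration_with_native_tls = True is also set in this dump, so the TLS scheme is consistent with the rest of the logged migration configuration.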
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.849 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.850 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.850 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.850 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.850 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.850 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.850 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.850 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.851 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.851 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.851 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.851 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.851 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.851 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.852 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.852 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.852 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.852 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.rbd_secret_uuid        = 67f67f44-54fc-54ea-8df0-10931b6ecdaf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.852 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.852 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.852 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.853 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.853 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.853 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.853 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.853 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.853 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.854 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.854 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.854 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.854 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.854 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.855 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.855 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.855 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.855 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.855 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.856 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.856 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.856 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.856 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.856 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.856 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.856 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.857 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.857 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.857 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.857 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.857 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.857 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.858 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.858 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.858 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.858 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.858 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.858 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.859 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.859 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.859 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.859 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.859 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.859 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.860 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.860 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.860 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.860 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.860 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.860 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.860 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.861 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.861 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.861 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.861 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.861 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.861 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.862 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.862 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.862 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.862 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.862 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.862 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.863 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.863 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.863 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.863 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.863 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.864 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.864 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.864 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.864 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.864 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.864 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.865 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.865 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.865 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.865 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.865 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.865 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.865 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.866 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.866 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.866 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.866 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.866 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.866 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.866 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.867 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.867 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.867 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.867 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.867 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.867 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.867 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.868 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.868 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.868 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.868 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.868 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.868 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.868 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.869 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.869 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.869 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.869 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.869 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.869 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.870 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.870 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.870 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.870 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.870 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.870 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.870 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.871 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.871 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.871 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.871 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.871 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.872 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.872 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.872 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.872 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.872 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.872 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.872 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.873 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.873 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.873 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.873 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.873 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.873 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.874 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.874 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.874 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.874 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.874 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.874 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.874 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.875 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.875 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.875 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.875 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.875 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.875 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.875 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.876 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.876 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.876 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 sudo[244054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.876 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.876 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.876 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.876 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.877 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.877 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.877 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.877 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.877 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.877 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.877 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.878 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.878 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.878 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.878 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.878 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.878 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.879 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.879 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.879 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.879 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.879 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.879 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.879 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.880 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.880 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.880 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.880 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.880 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.880 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.880 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.881 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.881 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.881 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.881 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.881 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.881 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.881 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.882 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.882 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.882 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.882 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.882 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.882 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.882 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.883 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.883 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.883 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.883 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 sudo[244054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:21:52 compute-0 sudo[244054]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.883 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.883 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.883 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.884 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.884 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.884 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.884 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.884 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.884 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.884 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.885 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.885 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.885 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.885 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.885 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.885 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.885 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.885 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.886 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.886 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.886 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.886 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.886 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.886 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.886 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.887 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.887 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.887 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.887 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.887 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.887 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.888 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.888 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.888 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.888 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.888 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.888 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.888 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.889 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.889 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.889 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.889 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.889 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.889 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.889 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.889 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.890 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.890 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.890 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.890 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.890 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.890 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.891 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.891 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.891 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.891 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.891 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.891 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.891 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.891 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.892 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.892 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.892 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.892 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.892 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.892 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.893 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.893 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.893 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.893 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.893 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.893 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.893 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.894 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.894 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.894 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.894 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.894 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.894 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.895 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.895 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.895 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.895 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.895 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.895 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.896 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.896 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.896 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.896 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.896 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.896 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.897 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.897 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.897 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.897 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.897 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.897 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.897 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.898 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.898 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.898 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.898 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.898 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.898 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.899 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.899 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.899 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.899 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.899 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.899 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.899 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.899 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.900 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.900 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.900 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.900 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.900 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.900 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.900 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.901 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.901 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.901 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.901 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.901 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.901 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.901 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.901 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.902 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.902 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.902 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.902 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.902 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.902 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.902 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.903 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.903 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.903 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.903 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.903 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.903 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.903 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.904 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.904 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.904 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.904 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.904 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.904 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.904 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.905 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.905 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.905 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.905 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.905 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.905 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.905 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.906 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.906 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.906 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.906 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.906 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.906 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.906 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.907 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.907 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.907 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.907 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.907 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.907 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.907 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.908 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.908 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.908 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.908 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.908 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.908 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.909 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.909 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.909 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.909 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.909 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.909 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.909 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.910 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.910 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.910 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.910 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.910 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.910 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.911 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.911 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.911 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.911 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.911 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.912 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.912 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.912 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.912 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.912 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.912 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.912 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.912 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.913 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.913 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.913 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.913 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.913 243461 DEBUG oslo_service.service [None req-2e09b26d-0ee2-42a8-8c4f-4df23b08abbb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
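[Editor's note] The block ending at the asterisk banner above is oslo.config's standard startup dump: at DEBUG level the service calls ConfigOpts.log_opt_values(), which prints one line per registered option (cfg.py:2609 in this build) and closes with a row of asterisks (cfg.py:2613). A minimal sketch of how such a dump is produced, assuming oslo.config is installed; the option names and group below are illustrative, not taken from this log:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    # Illustrative options; a real service registers hundreds of these.
    opts = [
        cfg.StrOpt('username', default='nova'),
        cfg.IntOpt('thread_pool_size', default=8),
    ]

    CONF = cfg.ConfigOpts()
    CONF.register_opts(opts, group='example_group')

    CONF([])                                  # parse argv (empty here)
    CONF.log_opt_values(LOG, logging.DEBUG)   # one DEBUG line per option,
                                              # bracketed by asterisk banners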
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.914 243461 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.935 243461 INFO nova.virt.node [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Determined node identity ca130087-db63-46e1-b278-a80bb66e6865 from /var/lib/nova/compute_id
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.935 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.936 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.936 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.936 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 09 16:21:52 compute-0 sudo[244079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:21:52 compute-0 sudo[244079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.948 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4781c470a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.955 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4781c470a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.956 243461 INFO nova.virt.libvirt.driver [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Connection event '1' reason 'None'
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.961 243461 INFO nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Libvirt host capabilities <capabilities>
Dec 09 16:21:52 compute-0 nova_compute[243452]: 
Dec 09 16:21:52 compute-0 nova_compute[243452]:   <host>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <uuid>e88682e8-711b-4c73-9c89-1e4f2bbcc348</uuid>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <cpu>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <arch>x86_64</arch>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model>EPYC-Rome-v4</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <vendor>AMD</vendor>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <microcode version='16777317'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <signature family='23' model='49' stepping='0'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='x2apic'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='tsc-deadline'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='osxsave'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='hypervisor'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='tsc_adjust'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='spec-ctrl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='stibp'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='arch-capabilities'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='ssbd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='cmp_legacy'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='topoext'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='virt-ssbd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='lbrv'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='tsc-scale'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='vmcb-clean'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='pause-filter'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='pfthreshold'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='svme-addr-chk'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='rdctl-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='skip-l1dfl-vmentry'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='mds-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature name='pschange-mc-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <pages unit='KiB' size='4'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <pages unit='KiB' size='2048'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <pages unit='KiB' size='1048576'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </cpu>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <power_management>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <suspend_mem/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </power_management>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <iommu support='no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <migration_features>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <live/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <uri_transports>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <uri_transport>tcp</uri_transport>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <uri_transport>rdma</uri_transport>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </uri_transports>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </migration_features>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <topology>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <cells num='1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <cell id='0'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:           <memory unit='KiB'>7864300</memory>
Dec 09 16:21:52 compute-0 nova_compute[243452]:           <pages unit='KiB' size='4'>1966075</pages>
Dec 09 16:21:52 compute-0 nova_compute[243452]:           <pages unit='KiB' size='2048'>0</pages>
Dec 09 16:21:52 compute-0 nova_compute[243452]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 09 16:21:52 compute-0 nova_compute[243452]:           <distances>
Dec 09 16:21:52 compute-0 nova_compute[243452]:             <sibling id='0' value='10'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:           </distances>
Dec 09 16:21:52 compute-0 nova_compute[243452]:           <cpus num='8'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:           </cpus>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         </cell>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </cells>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </topology>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <cache>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </cache>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <secmodel>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model>selinux</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <doi>0</doi>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </secmodel>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <secmodel>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model>dac</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <doi>0</doi>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </secmodel>
Dec 09 16:21:52 compute-0 nova_compute[243452]:   </host>
Dec 09 16:21:52 compute-0 nova_compute[243452]: 
Dec 09 16:21:52 compute-0 nova_compute[243452]:   <guest>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <os_type>hvm</os_type>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <arch name='i686'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <wordsize>32</wordsize>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <domain type='qemu'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <domain type='kvm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </arch>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <features>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <pae/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <nonpae/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <acpi default='on' toggle='yes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <apic default='on' toggle='no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <cpuselection/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <deviceboot/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <disksnapshot default='on' toggle='no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <externalSnapshot/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </features>
Dec 09 16:21:52 compute-0 nova_compute[243452]:   </guest>
Dec 09 16:21:52 compute-0 nova_compute[243452]: 
Dec 09 16:21:52 compute-0 nova_compute[243452]:   <guest>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <os_type>hvm</os_type>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <arch name='x86_64'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <wordsize>64</wordsize>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <domain type='qemu'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <domain type='kvm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </arch>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <features>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <acpi default='on' toggle='yes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <apic default='on' toggle='no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <cpuselection/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <deviceboot/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <disksnapshot default='on' toggle='no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <externalSnapshot/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </features>
Dec 09 16:21:52 compute-0 nova_compute[243452]:   </guest>
Dec 09 16:21:52 compute-0 nova_compute[243452]: 
Dec 09 16:21:52 compute-0 nova_compute[243452]: </capabilities>
Dec 09 16:21:52 compute-0 nova_compute[243452]: 
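[Editor's note] The <capabilities> document logged above is what the libvirt driver inspects to learn guest architectures and machine types; the very next line shows nova deriving machine types {'pc', 'q35'} from it. A minimal, self-contained sketch of extracting that information with the Python standard library; the sample XML is a truncated copy of the output above:

    import xml.etree.ElementTree as ET

    SAMPLE = """<capabilities>
      <guest>
        <os_type>hvm</os_type>
        <arch name='x86_64'>
          <wordsize>64</wordsize>
          <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
          <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
        </arch>
      </guest>
    </capabilities>"""

    def guest_machines(caps_xml):
        """Map each guest arch name to its list of machine-type strings."""
        root = ET.fromstring(caps_xml)
        return {
            g.find('arch').get('name'):
                [m.text for m in g.find('arch').findall('machine')]
            for g in root.findall('guest')
        }

    print(guest_machines(SAMPLE))
    # {'x86_64': ['pc-q35-rhel9.8.0', 'q35']}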
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.970 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 09 16:21:52 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.975 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 09 16:21:52 compute-0 nova_compute[243452]: <domainCapabilities>
Dec 09 16:21:52 compute-0 nova_compute[243452]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 16:21:52 compute-0 nova_compute[243452]:   <domain>kvm</domain>
Dec 09 16:21:52 compute-0 nova_compute[243452]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 09 16:21:52 compute-0 nova_compute[243452]:   <arch>i686</arch>
Dec 09 16:21:52 compute-0 nova_compute[243452]:   <vcpu max='240'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:   <iothreads supported='yes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:   <os supported='yes'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <enum name='firmware'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <loader supported='yes'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <value>rom</value>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <value>pflash</value>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <enum name='readonly'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <value>yes</value>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <value>no</value>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <enum name='secure'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <value>no</value>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </loader>
Dec 09 16:21:52 compute-0 nova_compute[243452]:   </os>
Dec 09 16:21:52 compute-0 nova_compute[243452]:   <cpu>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <mode name='host-passthrough' supported='yes'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <enum name='hostPassthroughMigratable'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <value>on</value>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <value>off</value>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <mode name='maximum' supported='yes'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <enum name='maximumMigratable'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <value>on</value>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <value>off</value>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <mode name='host-model' supported='yes'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <vendor>AMD</vendor>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='x2apic'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='hypervisor'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='stibp'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='ssbd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='overflow-recov'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='succor'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='ibrs'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='lbrv'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='tsc-scale'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='flushbyasid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='pause-filter'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='pfthreshold'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <feature policy='disable' name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:52 compute-0 nova_compute[243452]:     <mode name='custom' supported='yes'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Broadwell'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Broadwell-IBRS'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Broadwell-noTSX'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v2'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v3'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v4'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Cooperlake'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Cooperlake-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Cooperlake-v2'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Denverton'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Denverton-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Denverton-v2'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Denverton-v3'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Dhyana-v2'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='EPYC-Genoa'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amd-psfd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='auto-ibrs'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='stibp-always-on'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amd-psfd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='auto-ibrs'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='stibp-always-on'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='EPYC-Milan'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='EPYC-Milan-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='EPYC-Milan-v2'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amd-psfd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='stibp-always-on'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome-v2'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome-v3'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='EPYC-v3'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='EPYC-v4'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='GraniteRapids'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-fp16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='prefetchiti'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='GraniteRapids-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-fp16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='prefetchiti'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='GraniteRapids-v2'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-fp16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx10'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx10-128'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx10-256'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx10-512'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='prefetchiti'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Haswell'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Haswell-IBRS'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Haswell-noTSX'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Haswell-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Haswell-v2'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Haswell-v3'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Haswell-v4'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v2'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v3'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v4'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v5'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v6'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v7'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='IvyBridge'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='IvyBridge-IBRS'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='IvyBridge-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='IvyBridge-v2'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='KnightsMill'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512er'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512pf'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='KnightsMill-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512er'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512pf'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Opteron_G4'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Opteron_G4-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Opteron_G5'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='tbm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='Opteron_G5-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='tbm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids-v1'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids-v2'>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 16:21:52 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SierraForest'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cmpccxadd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SierraForest-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cmpccxadd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='athlon'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='athlon-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='core2duo'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='core2duo-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='coreduo'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='coreduo-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='n270'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='n270-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='phenom'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='phenom-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </cpu>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <memoryBacking supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <enum name='sourceType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>file</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>anonymous</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>memfd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </memoryBacking>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <devices>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <disk supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='diskDevice'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>disk</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>cdrom</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>floppy</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>lun</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='bus'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>ide</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>fdc</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>scsi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>usb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>sata</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-non-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </disk>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <graphics supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vnc</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>egl-headless</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>dbus</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </graphics>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <video supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='modelType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vga</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>cirrus</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>none</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>bochs</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>ramfb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </video>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <hostdev supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='mode'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>subsystem</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='startupPolicy'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>default</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>mandatory</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>requisite</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>optional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='subsysType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>usb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pci</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>scsi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='capsType'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='pciBackend'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </hostdev>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <rng supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-non-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendModel'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>random</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>egd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>builtin</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </rng>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <filesystem supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='driverType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>path</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>handle</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtiofs</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </filesystem>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <tpm supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tpm-tis</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tpm-crb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendModel'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>emulator</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>external</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendVersion'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>2.0</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </tpm>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <redirdev supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='bus'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>usb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </redirdev>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <channel supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pty</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>unix</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </channel>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <crypto supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>qemu</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendModel'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>builtin</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </crypto>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <interface supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>default</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>passt</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </interface>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <panic supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>isa</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>hyperv</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </panic>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <console supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>null</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vc</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pty</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>dev</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>file</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pipe</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>stdio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>udp</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tcp</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>unix</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>qemu-vdagent</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>dbus</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </console>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </devices>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <features>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <gic supported='no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <vmcoreinfo supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <genid supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <backingStoreInput supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <backup supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <async-teardown supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <ps2 supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <sev supported='no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <sgx supported='no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <hyperv supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='features'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>relaxed</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vapic</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>spinlocks</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vpindex</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>runtime</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>synic</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>stimer</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>reset</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vendor_id</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>frequencies</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>reenlightenment</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tlbflush</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>ipi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>avic</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>emsr_bitmap</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>xmm_input</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <defaults>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <spinlocks>4095</spinlocks>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <stimer_direct>on</stimer_direct>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </defaults>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </hyperv>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <launchSecurity supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='sectype'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tdx</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </launchSecurity>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </features>
Dec 09 16:21:53 compute-0 nova_compute[243452]: </domainCapabilities>
Dec 09 16:21:53 compute-0 nova_compute[243452]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.981 243461 DEBUG nova.virt.libvirt.volume.mount [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:52.986 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 09 16:21:53 compute-0 nova_compute[243452]: <domainCapabilities>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <domain>kvm</domain>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <arch>i686</arch>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <vcpu max='4096'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <iothreads supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <os supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <enum name='firmware'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <loader supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>rom</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pflash</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='readonly'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>yes</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>no</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='secure'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>no</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </loader>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </os>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <cpu>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <mode name='host-passthrough' supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='hostPassthroughMigratable'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>on</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>off</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <mode name='maximum' supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='maximumMigratable'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>on</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>off</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <mode name='host-model' supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <vendor>AMD</vendor>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='x2apic'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='hypervisor'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='stibp'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='ssbd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='overflow-recov'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='succor'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='ibrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='lbrv'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='tsc-scale'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='flushbyasid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='pause-filter'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='pfthreshold'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='disable' name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <mode name='custom' supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-noTSX'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cooperlake'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cooperlake-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cooperlake-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Denverton'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Denverton-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Denverton-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Denverton-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Dhyana-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Genoa'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amd-psfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='auto-ibrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='stibp-always-on'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amd-psfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='auto-ibrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='stibp-always-on'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Milan'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Milan-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Milan-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amd-psfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='stibp-always-on'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='GraniteRapids'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='prefetchiti'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='GraniteRapids-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='prefetchiti'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='GraniteRapids-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx10'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx10-128'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx10-256'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx10-512'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='prefetchiti'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-noTSX'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v6'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v7'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='IvyBridge'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='IvyBridge-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='IvyBridge-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='IvyBridge-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='KnightsMill'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512er'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512pf'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='KnightsMill-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512er'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512pf'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Opteron_G4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Opteron_G4-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Opteron_G5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tbm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Opteron_G5-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tbm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SierraForest'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cmpccxadd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SierraForest-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cmpccxadd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='athlon'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='athlon-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='core2duo'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='core2duo-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='coreduo'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='coreduo-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='n270'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='n270-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='phenom'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='phenom-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </cpu>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <memoryBacking supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <enum name='sourceType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>file</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>anonymous</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>memfd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </memoryBacking>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <devices>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <disk supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='diskDevice'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>disk</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>cdrom</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>floppy</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>lun</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='bus'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>fdc</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>scsi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>usb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>sata</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-non-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </disk>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <graphics supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vnc</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>egl-headless</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>dbus</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </graphics>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <video supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='modelType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vga</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>cirrus</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>none</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>bochs</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>ramfb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </video>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <hostdev supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='mode'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>subsystem</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='startupPolicy'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>default</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>mandatory</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>requisite</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>optional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='subsysType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>usb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pci</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>scsi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='capsType'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='pciBackend'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </hostdev>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <rng supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-non-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendModel'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>random</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>egd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>builtin</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </rng>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <filesystem supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='driverType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>path</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>handle</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtiofs</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </filesystem>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <tpm supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tpm-tis</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tpm-crb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendModel'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>emulator</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>external</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendVersion'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>2.0</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </tpm>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <redirdev supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='bus'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>usb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </redirdev>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <channel supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pty</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>unix</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </channel>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <crypto supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>qemu</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendModel'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>builtin</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </crypto>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <interface supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>default</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>passt</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </interface>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <panic supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>isa</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>hyperv</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </panic>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <console supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>null</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vc</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pty</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>dev</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>file</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pipe</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>stdio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>udp</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tcp</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>unix</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>qemu-vdagent</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>dbus</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </console>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </devices>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <features>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <gic supported='no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <vmcoreinfo supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <genid supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <backingStoreInput supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <backup supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <async-teardown supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <ps2 supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <sev supported='no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <sgx supported='no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <hyperv supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='features'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>relaxed</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vapic</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>spinlocks</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vpindex</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>runtime</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>synic</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>stimer</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>reset</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vendor_id</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>frequencies</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>reenlightenment</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tlbflush</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>ipi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>avic</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>emsr_bitmap</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>xmm_input</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <defaults>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <spinlocks>4095</spinlocks>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <stimer_direct>on</stimer_direct>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </defaults>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </hyperv>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <launchSecurity supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='sectype'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tdx</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </launchSecurity>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </features>
Dec 09 16:21:53 compute-0 nova_compute[243452]: </domainCapabilities>
Dec 09 16:21:53 compute-0 nova_compute[243452]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.015 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.020 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 09 16:21:53 compute-0 nova_compute[243452]: <domainCapabilities>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <domain>kvm</domain>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <arch>x86_64</arch>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <vcpu max='240'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <iothreads supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <os supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <enum name='firmware'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <loader supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>rom</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pflash</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='readonly'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>yes</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>no</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='secure'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>no</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </loader>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </os>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <cpu>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <mode name='host-passthrough' supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='hostPassthroughMigratable'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>on</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>off</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <mode name='maximum' supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='maximumMigratable'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>on</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>off</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <mode name='host-model' supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <vendor>AMD</vendor>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='x2apic'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='hypervisor'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='stibp'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='ssbd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='overflow-recov'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='succor'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='ibrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='lbrv'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='tsc-scale'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='flushbyasid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='pause-filter'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='pfthreshold'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='disable' name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <mode name='custom' supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-noTSX'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cooperlake'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cooperlake-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cooperlake-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Denverton'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Denverton-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Denverton-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Denverton-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Dhyana-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Genoa'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amd-psfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='auto-ibrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='stibp-always-on'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amd-psfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='auto-ibrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='stibp-always-on'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Milan'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Milan-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Milan-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amd-psfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='stibp-always-on'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='GraniteRapids'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='prefetchiti'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='GraniteRapids-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='prefetchiti'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='GraniteRapids-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx10'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx10-128'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx10-256'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx10-512'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='prefetchiti'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-noTSX'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v6'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v7'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='IvyBridge'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='IvyBridge-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='IvyBridge-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='IvyBridge-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='KnightsMill'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512er'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512pf'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='KnightsMill-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512er'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512pf'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Opteron_G4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Opteron_G4-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Opteron_G5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tbm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Opteron_G5-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tbm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SierraForest'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cmpccxadd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SierraForest-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cmpccxadd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='athlon'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='athlon-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='core2duo'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='core2duo-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='coreduo'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='coreduo-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='n270'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='n270-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='phenom'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='phenom-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </cpu>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <memoryBacking supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <enum name='sourceType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>file</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>anonymous</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>memfd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </memoryBacking>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <devices>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <disk supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='diskDevice'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>disk</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>cdrom</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>floppy</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>lun</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='bus'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>ide</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>fdc</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>scsi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>usb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>sata</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-non-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </disk>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <graphics supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vnc</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>egl-headless</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>dbus</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </graphics>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <video supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='modelType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vga</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>cirrus</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>none</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>bochs</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>ramfb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </video>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <hostdev supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='mode'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>subsystem</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='startupPolicy'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>default</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>mandatory</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>requisite</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>optional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='subsysType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>usb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pci</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>scsi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='capsType'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='pciBackend'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </hostdev>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <rng supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-non-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendModel'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>random</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>egd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>builtin</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </rng>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <filesystem supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='driverType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>path</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>handle</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtiofs</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </filesystem>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <tpm supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tpm-tis</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tpm-crb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendModel'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>emulator</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>external</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendVersion'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>2.0</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </tpm>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <redirdev supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='bus'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>usb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </redirdev>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <channel supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pty</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>unix</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </channel>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <crypto supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>qemu</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendModel'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>builtin</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </crypto>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <interface supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>default</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>passt</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </interface>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <panic supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>isa</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>hyperv</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </panic>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <console supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>null</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vc</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pty</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>dev</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>file</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pipe</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>stdio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>udp</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tcp</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>unix</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>qemu-vdagent</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>dbus</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </console>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </devices>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <features>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <gic supported='no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <vmcoreinfo supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <genid supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <backingStoreInput supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <backup supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <async-teardown supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <ps2 supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <sev supported='no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <sgx supported='no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <hyperv supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='features'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>relaxed</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vapic</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>spinlocks</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vpindex</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>runtime</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>synic</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>stimer</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>reset</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vendor_id</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>frequencies</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>reenlightenment</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tlbflush</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>ipi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>avic</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>emsr_bitmap</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>xmm_input</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <defaults>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <spinlocks>4095</spinlocks>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <stimer_direct>on</stimer_direct>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </defaults>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </hyperv>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <launchSecurity supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='sectype'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tdx</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </launchSecurity>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </features>
Dec 09 16:21:53 compute-0 nova_compute[243452]: </domainCapabilities>
Dec 09 16:21:53 compute-0 nova_compute[243452]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.099 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 09 16:21:53 compute-0 nova_compute[243452]: <domainCapabilities>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <domain>kvm</domain>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <arch>x86_64</arch>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <vcpu max='4096'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <iothreads supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <os supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <enum name='firmware'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>efi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <loader supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>rom</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pflash</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='readonly'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>yes</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>no</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='secure'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>yes</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>no</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </loader>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </os>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <cpu>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <mode name='host-passthrough' supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='hostPassthroughMigratable'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>on</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>off</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <mode name='maximum' supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='maximumMigratable'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>on</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>off</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <mode name='host-model' supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <vendor>AMD</vendor>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='x2apic'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='hypervisor'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='stibp'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='ssbd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='overflow-recov'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='succor'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='ibrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='lbrv'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='tsc-scale'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='flushbyasid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='pause-filter'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='pfthreshold'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <feature policy='disable' name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <mode name='custom' supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-noTSX'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Broadwell-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cooperlake'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cooperlake-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Cooperlake-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Denverton'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Denverton-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Denverton-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Denverton-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Dhyana-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Genoa'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amd-psfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='auto-ibrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='stibp-always-on'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amd-psfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='auto-ibrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='stibp-always-on'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Milan'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Milan-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Milan-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amd-psfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='no-nested-data-bp'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='null-sel-clr-base'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='stibp-always-on'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-Rome-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='EPYC-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='GraniteRapids'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='prefetchiti'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='GraniteRapids-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='prefetchiti'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='GraniteRapids-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx10'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx10-128'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx10-256'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx10-512'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='prefetchiti'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-noTSX'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Haswell-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v6'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Icelake-Server-v7'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='IvyBridge'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='IvyBridge-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='IvyBridge-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='IvyBridge-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='KnightsMill'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512er'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512pf'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='KnightsMill-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-4fmaps'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-4vnniw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512er'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512pf'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Opteron_G4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Opteron_G4-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Opteron_G5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tbm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Opteron_G5-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fma4'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tbm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xop'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SapphireRapids-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='amx-tile'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-bf16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-fp16'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512-vpopcntdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bitalg'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vbmi2'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrc'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fzrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='la57'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='taa-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='tsx-ldtrk'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xfd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SierraForest'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cmpccxadd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='SierraForest-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ifma'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-ne-convert'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx-vnni-int8'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='bus-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cmpccxadd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fbsdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='fsrs'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ibrs-all'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mcdt-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pbrsb-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='psdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='sbdr-ssdp-no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='serialize'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vaes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='vpclmulqdq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Client-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='hle'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='rtm'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Skylake-Server-v5'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512bw'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512cd'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512dq'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512f'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='avx512vl'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='invpcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pcid'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='pku'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='mpx'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v2'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v3'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='core-capability'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='split-lock-detect'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='Snowridge-v4'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='cldemote'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='erms'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='gfni'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdir64b'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='movdiri'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='xsaves'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='athlon'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='athlon-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='core2duo'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='core2duo-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='coreduo'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='coreduo-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='n270'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='n270-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='ss'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='phenom'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <blockers model='phenom-v1'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnow'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <feature name='3dnowext'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </blockers>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </mode>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </cpu>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <memoryBacking supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <enum name='sourceType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>file</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>anonymous</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <value>memfd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </memoryBacking>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <devices>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <disk supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='diskDevice'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>disk</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>cdrom</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>floppy</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>lun</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='bus'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>fdc</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>scsi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>usb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>sata</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-non-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </disk>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <graphics supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vnc</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>egl-headless</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>dbus</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </graphics>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <video supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='modelType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vga</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>cirrus</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>none</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>bochs</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>ramfb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </video>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <hostdev supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='mode'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>subsystem</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='startupPolicy'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>default</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>mandatory</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>requisite</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>optional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='subsysType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>usb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pci</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>scsi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='capsType'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='pciBackend'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </hostdev>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <rng supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtio-non-transitional</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendModel'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>random</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>egd</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>builtin</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </rng>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <filesystem supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='driverType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>path</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>handle</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>virtiofs</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </filesystem>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <tpm supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tpm-tis</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tpm-crb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendModel'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>emulator</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>external</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendVersion'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>2.0</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </tpm>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <redirdev supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='bus'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>usb</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </redirdev>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <channel supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pty</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>unix</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </channel>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <crypto supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>qemu</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendModel'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>builtin</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </crypto>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <interface supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='backendType'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>default</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>passt</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </interface>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <panic supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='model'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>isa</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>hyperv</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </panic>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <console supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='type'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>null</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vc</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pty</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>dev</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>file</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>pipe</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>stdio</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>udp</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tcp</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>unix</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>qemu-vdagent</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>dbus</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </console>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </devices>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   <features>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <gic supported='no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <vmcoreinfo supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <genid supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <backingStoreInput supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <backup supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <async-teardown supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <ps2 supported='yes'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <sev supported='no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <sgx supported='no'/>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <hyperv supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='features'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>relaxed</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vapic</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>spinlocks</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vpindex</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>runtime</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>synic</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>stimer</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>reset</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>vendor_id</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>frequencies</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>reenlightenment</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tlbflush</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>ipi</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>avic</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>emsr_bitmap</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>xmm_input</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <defaults>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <spinlocks>4095</spinlocks>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <stimer_direct>on</stimer_direct>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </defaults>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </hyperv>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     <launchSecurity supported='yes'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       <enum name='sectype'>
Dec 09 16:21:53 compute-0 nova_compute[243452]:         <value>tdx</value>
Dec 09 16:21:53 compute-0 nova_compute[243452]:       </enum>
Dec 09 16:21:53 compute-0 nova_compute[243452]:     </launchSecurity>
Dec 09 16:21:53 compute-0 nova_compute[243452]:   </features>
Dec 09 16:21:53 compute-0 nova_compute[243452]: </domainCapabilities>
Dec 09 16:21:53 compute-0 nova_compute[243452]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.163 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.163 243461 INFO nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Secure Boot support detected
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.165 243461 INFO nova.virt.libvirt.driver [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.175 243461 DEBUG nova.virt.libvirt.driver [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.204 243461 INFO nova.virt.node [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Determined node identity ca130087-db63-46e1-b278-a80bb66e6865 from /var/lib/nova/compute_id
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.235 243461 WARNING nova.compute.manager [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Compute nodes ['ca130087-db63-46e1-b278-a80bb66e6865'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 09 16:21:53 compute-0 podman[244136]: 2025-12-09 16:21:53.2577435 +0000 UTC m=+0.041993424 container create a45ee6f1784360a9c3854a47cbbbdb7fd6eb024dfc958a464592388ea93f3229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.261 243461 INFO nova.compute.manager [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.284 243461 WARNING nova.compute.manager [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.285 243461 DEBUG oslo_concurrency.lockutils [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.285 243461 DEBUG oslo_concurrency.lockutils [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.285 243461 DEBUG oslo_concurrency.lockutils [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.285 243461 DEBUG nova.compute.resource_tracker [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.285 243461 DEBUG oslo_concurrency.processutils [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:21:53 compute-0 systemd[1]: Started libpod-conmon-a45ee6f1784360a9c3854a47cbbbdb7fd6eb024dfc958a464592388ea93f3229.scope.
Dec 09 16:21:53 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:21:53 compute-0 podman[244136]: 2025-12-09 16:21:53.236995271 +0000 UTC m=+0.021245195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:21:53 compute-0 podman[244136]: 2025-12-09 16:21:53.342473337 +0000 UTC m=+0.126723291 container init a45ee6f1784360a9c3854a47cbbbdb7fd6eb024dfc958a464592388ea93f3229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:21:53 compute-0 podman[244136]: 2025-12-09 16:21:53.350627829 +0000 UTC m=+0.134877723 container start a45ee6f1784360a9c3854a47cbbbdb7fd6eb024dfc958a464592388ea93f3229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:21:53 compute-0 podman[244136]: 2025-12-09 16:21:53.35418606 +0000 UTC m=+0.138435974 container attach a45ee6f1784360a9c3854a47cbbbdb7fd6eb024dfc958a464592388ea93f3229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:21:53 compute-0 nostalgic_black[244154]: 167 167
Dec 09 16:21:53 compute-0 systemd[1]: libpod-a45ee6f1784360a9c3854a47cbbbdb7fd6eb024dfc958a464592388ea93f3229.scope: Deactivated successfully.
Dec 09 16:21:53 compute-0 podman[244136]: 2025-12-09 16:21:53.3580415 +0000 UTC m=+0.142291454 container died a45ee6f1784360a9c3854a47cbbbdb7fd6eb024dfc958a464592388ea93f3229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ffe3f8d6cc8cfc0f368f904ac585cbf350a168e544f317d98b00b1bd9d88dc2-merged.mount: Deactivated successfully.
Dec 09 16:21:53 compute-0 podman[244136]: 2025-12-09 16:21:53.394845505 +0000 UTC m=+0.179095409 container remove a45ee6f1784360a9c3854a47cbbbdb7fd6eb024dfc958a464592388ea93f3229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 16:21:53 compute-0 systemd[1]: libpod-conmon-a45ee6f1784360a9c3854a47cbbbdb7fd6eb024dfc958a464592388ea93f3229.scope: Deactivated successfully.
Dec 09 16:21:53 compute-0 podman[244196]: 2025-12-09 16:21:53.561276823 +0000 UTC m=+0.046424780 container create 396fb9353096128f1a91b3145fbf1b1a80fa49905735b5479fa38f3bff4dc04c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_snyder, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:21:53 compute-0 systemd[1]: Started libpod-conmon-396fb9353096128f1a91b3145fbf1b1a80fa49905735b5479fa38f3bff4dc04c.scope.
Dec 09 16:21:53 compute-0 podman[244196]: 2025-12-09 16:21:53.540769641 +0000 UTC m=+0.025917628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:21:53 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e9aca95312a15013971e16d25438166b6b1b712f236e9d8dfa825b5fd24fd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e9aca95312a15013971e16d25438166b6b1b712f236e9d8dfa825b5fd24fd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e9aca95312a15013971e16d25438166b6b1b712f236e9d8dfa825b5fd24fd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e9aca95312a15013971e16d25438166b6b1b712f236e9d8dfa825b5fd24fd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:21:53 compute-0 podman[244196]: 2025-12-09 16:21:53.663755565 +0000 UTC m=+0.148903542 container init 396fb9353096128f1a91b3145fbf1b1a80fa49905735b5479fa38f3bff4dc04c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_snyder, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 09 16:21:53 compute-0 podman[244196]: 2025-12-09 16:21:53.672630417 +0000 UTC m=+0.157778364 container start 396fb9353096128f1a91b3145fbf1b1a80fa49905735b5479fa38f3bff4dc04c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_snyder, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:21:53 compute-0 podman[244196]: 2025-12-09 16:21:53.676655781 +0000 UTC m=+0.161803738 container attach 396fb9353096128f1a91b3145fbf1b1a80fa49905735b5479fa38f3bff4dc04c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_snyder, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:21:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:21:53 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/893159409' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:21:53 compute-0 nova_compute[243452]: 2025-12-09 16:21:53.827 243461 DEBUG oslo_concurrency.processutils [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:21:53 compute-0 ceph-mon[75222]: pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:53 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/893159409' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:21:53 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 09 16:21:53 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 09 16:21:54 compute-0 nova_compute[243452]: 2025-12-09 16:21:54.152 243461 WARNING nova.virt.libvirt.driver [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:21:54 compute-0 nova_compute[243452]: 2025-12-09 16:21:54.153 243461 DEBUG nova.compute.resource_tracker [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:21:54 compute-0 nova_compute[243452]: 2025-12-09 16:21:54.153 243461 DEBUG oslo_concurrency.lockutils [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:21:54 compute-0 nova_compute[243452]: 2025-12-09 16:21:54.153 243461 DEBUG oslo_concurrency.lockutils [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:21:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:54 compute-0 nova_compute[243452]: 2025-12-09 16:21:54.340 243461 WARNING nova.compute.resource_tracker [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] No compute node record for compute-0.ctlplane.example.com:ca130087-db63-46e1-b278-a80bb66e6865: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host ca130087-db63-46e1-b278-a80bb66e6865 could not be found.
Dec 09 16:21:54 compute-0 nova_compute[243452]: 2025-12-09 16:21:54.360 243461 INFO nova.compute.resource_tracker [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: ca130087-db63-46e1-b278-a80bb66e6865
Dec 09 16:21:54 compute-0 lvm[244316]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:21:54 compute-0 lvm[244316]: VG ceph_vg1 finished
Dec 09 16:21:54 compute-0 lvm[244315]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:21:54 compute-0 lvm[244315]: VG ceph_vg0 finished
Dec 09 16:21:54 compute-0 nova_compute[243452]: 2025-12-09 16:21:54.450 243461 DEBUG nova.compute.resource_tracker [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:21:54 compute-0 nova_compute[243452]: 2025-12-09 16:21:54.451 243461 DEBUG nova.compute.resource_tracker [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:21:54 compute-0 lvm[244318]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:21:54 compute-0 lvm[244318]: VG ceph_vg2 finished
Dec 09 16:21:54 compute-0 hungry_snyder[244212]: {}
Dec 09 16:21:54 compute-0 systemd[1]: libpod-396fb9353096128f1a91b3145fbf1b1a80fa49905735b5479fa38f3bff4dc04c.scope: Deactivated successfully.
Dec 09 16:21:54 compute-0 podman[244196]: 2025-12-09 16:21:54.591264144 +0000 UTC m=+1.076412081 container died 396fb9353096128f1a91b3145fbf1b1a80fa49905735b5479fa38f3bff4dc04c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_snyder, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:21:54 compute-0 systemd[1]: libpod-396fb9353096128f1a91b3145fbf1b1a80fa49905735b5479fa38f3bff4dc04c.scope: Consumed 1.300s CPU time.
Dec 09 16:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7e9aca95312a15013971e16d25438166b6b1b712f236e9d8dfa825b5fd24fd4-merged.mount: Deactivated successfully.
Dec 09 16:21:54 compute-0 podman[244196]: 2025-12-09 16:21:54.639956457 +0000 UTC m=+1.125104404 container remove 396fb9353096128f1a91b3145fbf1b1a80fa49905735b5479fa38f3bff4dc04c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 16:21:54 compute-0 systemd[1]: libpod-conmon-396fb9353096128f1a91b3145fbf1b1a80fa49905735b5479fa38f3bff4dc04c.scope: Deactivated successfully.
Dec 09 16:21:54 compute-0 sudo[244079]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:21:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:21:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:21:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:21:54 compute-0 sudo[244331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:21:54 compute-0 sudo[244331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:21:54 compute-0 sudo[244331]: pam_unix(sudo:session): session closed for user root
Dec 09 16:21:55 compute-0 sshd-session[244356]: Invalid user odoo from 146.190.31.45 port 36462
Dec 09 16:21:55 compute-0 nova_compute[243452]: 2025-12-09 16:21:55.477 243461 INFO nova.scheduler.client.report [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] [req-224ab1b5-7352-44a9-8c91-fd503463dd77] Created resource provider record via placement API for resource provider with UUID ca130087-db63-46e1-b278-a80bb66e6865 and name compute-0.ctlplane.example.com.
Dec 09 16:21:55 compute-0 sshd-session[244356]: Connection closed by invalid user odoo 146.190.31.45 port 36462 [preauth]
Dec 09 16:21:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:21:55 compute-0 ceph-mon[75222]: pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:21:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:21:55 compute-0 nova_compute[243452]: 2025-12-09 16:21:55.902 243461 DEBUG oslo_concurrency.processutils [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:21:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:21:56 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4112951880' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.442 243461 DEBUG oslo_concurrency.processutils [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.449 243461 DEBUG nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 09 16:21:56 compute-0 nova_compute[243452]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.450 243461 INFO nova.virt.libvirt.host [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] kernel doesn't support AMD SEV
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.452 243461 DEBUG nova.compute.provider_tree [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Updating inventory in ProviderTree for provider ca130087-db63-46e1-b278-a80bb66e6865 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.453 243461 DEBUG nova.virt.libvirt.driver [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 09 16:21:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:21:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.519 243461 DEBUG nova.scheduler.client.report [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Updated inventory for provider ca130087-db63-46e1-b278-a80bb66e6865 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.519 243461 DEBUG nova.compute.provider_tree [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Updating resource provider ca130087-db63-46e1-b278-a80bb66e6865 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.520 243461 DEBUG nova.compute.provider_tree [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Updating inventory in ProviderTree for provider ca130087-db63-46e1-b278-a80bb66e6865 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 16:21:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:21:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:21:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:21:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.657 243461 DEBUG nova.compute.provider_tree [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Updating resource provider ca130087-db63-46e1-b278-a80bb66e6865 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.691 243461 DEBUG nova.compute.resource_tracker [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.691 243461 DEBUG oslo_concurrency.lockutils [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.692 243461 DEBUG nova.service [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 09 16:21:56 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4112951880' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.783 243461 DEBUG nova.service [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 09 16:21:56 compute-0 nova_compute[243452]: 2025-12-09 16:21:56.784 243461 DEBUG nova.servicegroup.drivers.db [None req-5ef9b9cc-f00d-446d-8954-5c26105ca24f - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 09 16:21:57 compute-0 ceph-mon[75222]: pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:21:59 compute-0 ceph-mon[75222]: pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:22:01 compute-0 ceph-mon[75222]: pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:03 compute-0 ceph-mon[75222]: pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:22:05 compute-0 podman[244380]: 2025-12-09 16:22:05.636937344 +0000 UTC m=+0.080630812 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 09 16:22:05 compute-0 ceph-mon[75222]: pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:07 compute-0 ceph-mon[75222]: pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:09 compute-0 ceph-mon[75222]: pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:22:11 compute-0 ceph-mon[75222]: pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:13 compute-0 ceph-mon[75222]: pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:22:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:22:15 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3283240521' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:22:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:22:15 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3283240521' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:22:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:22:15 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3791139246' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:22:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:22:15 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3791139246' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:22:15 compute-0 ceph-mon[75222]: pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:15 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3283240521' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:22:15 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3283240521' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:22:15 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3791139246' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:22:15 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3791139246' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:22:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:22:15 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1994506083' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:22:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:22:16 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1994506083' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:22:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:16 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/1994506083' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:22:16 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/1994506083' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:22:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:22:17.840 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:22:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:22:17.841 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:22:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:22:17.841 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:22:17 compute-0 ceph-mon[75222]: pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:19 compute-0 ceph-mon[75222]: pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:22:20 compute-0 podman[244401]: 2025-12-09 16:22:20.638844137 +0000 UTC m=+0.089840163 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller)
Dec 09 16:22:20 compute-0 ceph-mon[75222]: pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:21 compute-0 podman[244427]: 2025-12-09 16:22:21.621697889 +0000 UTC m=+0.071731379 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:22:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:23 compute-0 ceph-mon[75222]: pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:25 compute-0 ceph-mon[75222]: pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:22:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:22:25
Dec 09 16:22:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:22:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:22:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'vms', 'backups', 'default.rgw.meta', 'images', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log']
Dec 09 16:22:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:22:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:22:27 compute-0 ceph-mon[75222]: pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:29 compute-0 ceph-mon[75222]: pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:22:31 compute-0 ceph-mon[75222]: pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:33 compute-0 ceph-mon[75222]: pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:22:35 compute-0 ceph-mon[75222]: pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:36 compute-0 podman[244446]: 2025-12-09 16:22:36.606432277 +0000 UTC m=+0.056848696 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:22:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:22:36 compute-0 ceph-mon[75222]: pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:38 compute-0 nova_compute[243452]: 2025-12-09 16:22:38.786 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:22:38 compute-0 nova_compute[243452]: 2025-12-09 16:22:38.813 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:22:39 compute-0 ceph-mon[75222]: pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:40 compute-0 sshd-session[244466]: Invalid user odoo from 146.190.31.45 port 57222
Dec 09 16:22:40 compute-0 sshd-session[244466]: Connection closed by invalid user odoo 146.190.31.45 port 57222 [preauth]
Dec 09 16:22:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:22:41 compute-0 ceph-mon[75222]: pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:43 compute-0 ceph-mon[75222]: pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.533110) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297365533146, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1517, "num_deletes": 505, "total_data_size": 1971151, "memory_usage": 2006640, "flush_reason": "Manual Compaction"}
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297365549044, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1941393, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13592, "largest_seqno": 15108, "table_properties": {"data_size": 1934772, "index_size": 3312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16199, "raw_average_key_size": 18, "raw_value_size": 1919559, "raw_average_value_size": 2156, "num_data_blocks": 152, "num_entries": 890, "num_filter_entries": 890, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765297235, "oldest_key_time": 1765297235, "file_creation_time": 1765297365, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 16043 microseconds, and 5128 cpu microseconds.
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.549146) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1941393 bytes OK
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.549188) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.550662) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.550710) EVENT_LOG_v1 {"time_micros": 1765297365550701, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.550736) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1963435, prev total WAL file size 1963435, number of live WAL files 2.
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.551650) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1895KB)], [32(7652KB)]
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297365551715, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9777476, "oldest_snapshot_seqno": -1}
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3901 keys, 7791670 bytes, temperature: kUnknown
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297365615091, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7791670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7763262, "index_size": 17546, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 95246, "raw_average_key_size": 24, "raw_value_size": 7690336, "raw_average_value_size": 1971, "num_data_blocks": 743, "num_entries": 3901, "num_filter_entries": 3901, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765297365, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.615350) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7791670 bytes
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.616620) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.1 rd, 122.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.5 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(9.0) write-amplify(4.0) OK, records in: 4924, records dropped: 1023 output_compression: NoCompression
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.616640) EVENT_LOG_v1 {"time_micros": 1765297365616629, "job": 14, "event": "compaction_finished", "compaction_time_micros": 63451, "compaction_time_cpu_micros": 22499, "output_level": 6, "num_output_files": 1, "total_output_size": 7791670, "num_input_records": 4924, "num_output_records": 3901, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297365617041, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297365618172, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.551366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.618203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.618207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.618209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.618211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:22:45 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:22:45.618213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:22:45 compute-0 ceph-mon[75222]: pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 09 16:22:46 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1174270976' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 09 16:22:46 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 09 16:22:46 compute-0 ceph-mgr[75515]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 09 16:22:46 compute-0 ceph-mgr[75515]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 09 16:22:46 compute-0 ceph-mon[75222]: pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:46 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/1174270976' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 09 16:22:46 compute-0 ceph-mon[75222]: from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 09 16:22:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:49 compute-0 ceph-mon[75222]: pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:22:51 compute-0 ceph-mon[75222]: pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:51 compute-0 podman[244468]: 2025-12-09 16:22:51.638983134 +0000 UTC m=+0.082096152 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:22:51 compute-0 podman[244494]: 2025-12-09 16:22:51.718006987 +0000 UTC m=+0.052928290 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.057 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.058 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.058 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.058 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.070 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.071 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.071 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.071 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.071 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.072 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.072 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.072 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.072 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.092 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.093 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.093 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.093 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.093 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:22:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:52 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:22:52 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1434325895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.683 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.876 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.877 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5158MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.877 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:22:52 compute-0 nova_compute[243452]: 2025-12-09 16:22:52.877 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:22:53 compute-0 ceph-mon[75222]: pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:53 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1434325895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:22:53 compute-0 nova_compute[243452]: 2025-12-09 16:22:53.647 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:22:53 compute-0 nova_compute[243452]: 2025-12-09 16:22:53.647 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:22:53 compute-0 nova_compute[243452]: 2025-12-09 16:22:53.661 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:22:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:22:54 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4106992628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:22:54 compute-0 nova_compute[243452]: 2025-12-09 16:22:54.182 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:22:54 compute-0 nova_compute[243452]: 2025-12-09 16:22:54.188 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:22:54 compute-0 nova_compute[243452]: 2025-12-09 16:22:54.202 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:22:54 compute-0 nova_compute[243452]: 2025-12-09 16:22:54.228 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:22:54 compute-0 nova_compute[243452]: 2025-12-09 16:22:54.229 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:22:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:54 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4106992628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:22:54 compute-0 sudo[244555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:22:54 compute-0 sudo[244555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:22:54 compute-0 sudo[244555]: pam_unix(sudo:session): session closed for user root
Dec 09 16:22:54 compute-0 sudo[244580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:22:54 compute-0 sudo[244580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:22:55 compute-0 sudo[244580]: pam_unix(sudo:session): session closed for user root
Dec 09 16:22:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:22:55 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:22:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:22:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:22:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:22:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:22:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:22:55 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:22:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:22:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:22:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:22:55 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:22:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:22:55 compute-0 ceph-mon[75222]: pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:22:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:22:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:22:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:22:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:22:55 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:22:55 compute-0 sudo[244637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:22:55 compute-0 sudo[244637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:22:55 compute-0 sudo[244637]: pam_unix(sudo:session): session closed for user root
Dec 09 16:22:55 compute-0 sudo[244662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:22:55 compute-0 sudo[244662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:22:55 compute-0 podman[244699]: 2025-12-09 16:22:55.93464221 +0000 UTC m=+0.044094268 container create 4db6b20134363ed217925e23aeebca9be915bf5bf80c51d63fbc2a39681efc90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 09 16:22:55 compute-0 systemd[1]: Started libpod-conmon-4db6b20134363ed217925e23aeebca9be915bf5bf80c51d63fbc2a39681efc90.scope.
Dec 09 16:22:55 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:22:56 compute-0 podman[244699]: 2025-12-09 16:22:55.912022295 +0000 UTC m=+0.021474353 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:22:56 compute-0 podman[244699]: 2025-12-09 16:22:56.012760677 +0000 UTC m=+0.122212725 container init 4db6b20134363ed217925e23aeebca9be915bf5bf80c51d63fbc2a39681efc90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 09 16:22:56 compute-0 podman[244699]: 2025-12-09 16:22:56.021986381 +0000 UTC m=+0.131438449 container start 4db6b20134363ed217925e23aeebca9be915bf5bf80c51d63fbc2a39681efc90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:22:56 compute-0 podman[244699]: 2025-12-09 16:22:56.026417897 +0000 UTC m=+0.135869985 container attach 4db6b20134363ed217925e23aeebca9be915bf5bf80c51d63fbc2a39681efc90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:22:56 compute-0 cranky_fermat[244715]: 167 167
Dec 09 16:22:56 compute-0 systemd[1]: libpod-4db6b20134363ed217925e23aeebca9be915bf5bf80c51d63fbc2a39681efc90.scope: Deactivated successfully.
Dec 09 16:22:56 compute-0 conmon[244715]: conmon 4db6b20134363ed21792 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4db6b20134363ed217925e23aeebca9be915bf5bf80c51d63fbc2a39681efc90.scope/container/memory.events
Dec 09 16:22:56 compute-0 podman[244699]: 2025-12-09 16:22:56.0296846 +0000 UTC m=+0.139136628 container died 4db6b20134363ed217925e23aeebca9be915bf5bf80c51d63fbc2a39681efc90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 16:22:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f509efa365548eaf4ec8d0228e0fa62532ea15f17b63c440be4ca89156b172f8-merged.mount: Deactivated successfully.
Dec 09 16:22:56 compute-0 podman[244699]: 2025-12-09 16:22:56.069639289 +0000 UTC m=+0.179091317 container remove 4db6b20134363ed217925e23aeebca9be915bf5bf80c51d63fbc2a39681efc90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:22:56 compute-0 systemd[1]: libpod-conmon-4db6b20134363ed217925e23aeebca9be915bf5bf80c51d63fbc2a39681efc90.scope: Deactivated successfully.
Dec 09 16:22:56 compute-0 podman[244739]: 2025-12-09 16:22:56.244163296 +0000 UTC m=+0.061219087 container create ed2c71b7f79990406a540167ef5fe100c5dbf220348b20bdb4c53a7b7aaa864c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_einstein, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 09 16:22:56 compute-0 systemd[1]: Started libpod-conmon-ed2c71b7f79990406a540167ef5fe100c5dbf220348b20bdb4c53a7b7aaa864c.scope.
Dec 09 16:22:56 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:22:56 compute-0 podman[244739]: 2025-12-09 16:22:56.214544241 +0000 UTC m=+0.031600082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d2355b5ea250dcf9c0b664af29e02bf89853a928575f8d56215ae0e65f91e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d2355b5ea250dcf9c0b664af29e02bf89853a928575f8d56215ae0e65f91e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d2355b5ea250dcf9c0b664af29e02bf89853a928575f8d56215ae0e65f91e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d2355b5ea250dcf9c0b664af29e02bf89853a928575f8d56215ae0e65f91e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d2355b5ea250dcf9c0b664af29e02bf89853a928575f8d56215ae0e65f91e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:56 compute-0 podman[244739]: 2025-12-09 16:22:56.320852263 +0000 UTC m=+0.137908014 container init ed2c71b7f79990406a540167ef5fe100c5dbf220348b20bdb4c53a7b7aaa864c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_einstein, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 09 16:22:56 compute-0 podman[244739]: 2025-12-09 16:22:56.33301769 +0000 UTC m=+0.150073441 container start ed2c71b7f79990406a540167ef5fe100c5dbf220348b20bdb4c53a7b7aaa864c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 09 16:22:56 compute-0 podman[244739]: 2025-12-09 16:22:56.336576041 +0000 UTC m=+0.153631832 container attach ed2c71b7f79990406a540167ef5fe100c5dbf220348b20bdb4c53a7b7aaa864c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 09 16:22:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:22:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:22:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:22:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:22:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:22:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:22:56 compute-0 loving_einstein[244755]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:22:56 compute-0 loving_einstein[244755]: --> All data devices are unavailable
Dec 09 16:22:56 compute-0 systemd[1]: libpod-ed2c71b7f79990406a540167ef5fe100c5dbf220348b20bdb4c53a7b7aaa864c.scope: Deactivated successfully.
Dec 09 16:22:56 compute-0 podman[244739]: 2025-12-09 16:22:56.82787032 +0000 UTC m=+0.644926071 container died ed2c71b7f79990406a540167ef5fe100c5dbf220348b20bdb4c53a7b7aaa864c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:22:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-21d2355b5ea250dcf9c0b664af29e02bf89853a928575f8d56215ae0e65f91e9-merged.mount: Deactivated successfully.
Dec 09 16:22:56 compute-0 podman[244739]: 2025-12-09 16:22:56.880768248 +0000 UTC m=+0.697824009 container remove ed2c71b7f79990406a540167ef5fe100c5dbf220348b20bdb4c53a7b7aaa864c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:22:56 compute-0 systemd[1]: libpod-conmon-ed2c71b7f79990406a540167ef5fe100c5dbf220348b20bdb4c53a7b7aaa864c.scope: Deactivated successfully.
Dec 09 16:22:56 compute-0 sudo[244662]: pam_unix(sudo:session): session closed for user root
Dec 09 16:22:56 compute-0 sudo[244786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:22:56 compute-0 sudo[244786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:22:56 compute-0 sudo[244786]: pam_unix(sudo:session): session closed for user root
Dec 09 16:22:57 compute-0 sudo[244811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:22:57 compute-0 sudo[244811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:22:57 compute-0 podman[244847]: 2025-12-09 16:22:57.353305162 +0000 UTC m=+0.058790337 container create 12a08f36a873be1e41c352fd2088db302dc6d58ac6b183d269943767346b8e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:22:57 compute-0 systemd[1]: Started libpod-conmon-12a08f36a873be1e41c352fd2088db302dc6d58ac6b183d269943767346b8e28.scope.
Dec 09 16:22:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:22:57 compute-0 podman[244847]: 2025-12-09 16:22:57.332058146 +0000 UTC m=+0.037543341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:22:57 compute-0 podman[244847]: 2025-12-09 16:22:57.438173262 +0000 UTC m=+0.143658527 container init 12a08f36a873be1e41c352fd2088db302dc6d58ac6b183d269943767346b8e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:22:57 compute-0 podman[244847]: 2025-12-09 16:22:57.45210837 +0000 UTC m=+0.157593585 container start 12a08f36a873be1e41c352fd2088db302dc6d58ac6b183d269943767346b8e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 09 16:22:57 compute-0 podman[244847]: 2025-12-09 16:22:57.457088082 +0000 UTC m=+0.162573367 container attach 12a08f36a873be1e41c352fd2088db302dc6d58ac6b183d269943767346b8e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:22:57 compute-0 tender_driscoll[244864]: 167 167
Dec 09 16:22:57 compute-0 systemd[1]: libpod-12a08f36a873be1e41c352fd2088db302dc6d58ac6b183d269943767346b8e28.scope: Deactivated successfully.
Dec 09 16:22:57 compute-0 podman[244847]: 2025-12-09 16:22:57.459437459 +0000 UTC m=+0.164922674 container died 12a08f36a873be1e41c352fd2088db302dc6d58ac6b183d269943767346b8e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:22:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebfb65a57578db69d8c0e0d26fcf7d3e5876890a04ecde0056d320bcf0dfe4c5-merged.mount: Deactivated successfully.
Dec 09 16:22:57 compute-0 podman[244847]: 2025-12-09 16:22:57.500911031 +0000 UTC m=+0.206396206 container remove 12a08f36a873be1e41c352fd2088db302dc6d58ac6b183d269943767346b8e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:22:57 compute-0 systemd[1]: libpod-conmon-12a08f36a873be1e41c352fd2088db302dc6d58ac6b183d269943767346b8e28.scope: Deactivated successfully.
Dec 09 16:22:57 compute-0 ceph-mon[75222]: pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:57 compute-0 podman[244887]: 2025-12-09 16:22:57.69023686 +0000 UTC m=+0.044395137 container create 078791d75ddc2f65a3ad95c8fe7b9ee5b482290f2392a40d316b53d76fca1f4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_maxwell, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:22:57 compute-0 systemd[1]: Started libpod-conmon-078791d75ddc2f65a3ad95c8fe7b9ee5b482290f2392a40d316b53d76fca1f4e.scope.
Dec 09 16:22:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:22:57 compute-0 podman[244887]: 2025-12-09 16:22:57.671484205 +0000 UTC m=+0.025642512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e2cb53c2902e2fa1f9680c16e075845ea4b975fe11a4331c33c33614d71be6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e2cb53c2902e2fa1f9680c16e075845ea4b975fe11a4331c33c33614d71be6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e2cb53c2902e2fa1f9680c16e075845ea4b975fe11a4331c33c33614d71be6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e2cb53c2902e2fa1f9680c16e075845ea4b975fe11a4331c33c33614d71be6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:57 compute-0 podman[244887]: 2025-12-09 16:22:57.780983217 +0000 UTC m=+0.135141534 container init 078791d75ddc2f65a3ad95c8fe7b9ee5b482290f2392a40d316b53d76fca1f4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:22:57 compute-0 podman[244887]: 2025-12-09 16:22:57.787949546 +0000 UTC m=+0.142107863 container start 078791d75ddc2f65a3ad95c8fe7b9ee5b482290f2392a40d316b53d76fca1f4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 09 16:22:57 compute-0 podman[244887]: 2025-12-09 16:22:57.791805326 +0000 UTC m=+0.145963603 container attach 078791d75ddc2f65a3ad95c8fe7b9ee5b482290f2392a40d316b53d76fca1f4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]: {
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:     "0": [
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:         {
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "devices": [
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "/dev/loop3"
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             ],
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_name": "ceph_lv0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_size": "21470642176",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "name": "ceph_lv0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "tags": {
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.cluster_name": "ceph",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.crush_device_class": "",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.encrypted": "0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.objectstore": "bluestore",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.osd_id": "0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.type": "block",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.vdo": "0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.with_tpm": "0"
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             },
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "type": "block",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "vg_name": "ceph_vg0"
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:         }
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:     ],
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:     "1": [
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:         {
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "devices": [
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "/dev/loop4"
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             ],
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_name": "ceph_lv1",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_size": "21470642176",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "name": "ceph_lv1",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "tags": {
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.cluster_name": "ceph",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.crush_device_class": "",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.encrypted": "0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.objectstore": "bluestore",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.osd_id": "1",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.type": "block",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.vdo": "0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.with_tpm": "0"
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             },
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "type": "block",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "vg_name": "ceph_vg1"
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:         }
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:     ],
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:     "2": [
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:         {
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "devices": [
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "/dev/loop5"
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             ],
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_name": "ceph_lv2",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_size": "21470642176",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "name": "ceph_lv2",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "tags": {
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.cluster_name": "ceph",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.crush_device_class": "",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.encrypted": "0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.objectstore": "bluestore",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.osd_id": "2",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.type": "block",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.vdo": "0",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:                 "ceph.with_tpm": "0"
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             },
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "type": "block",
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:             "vg_name": "ceph_vg2"
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:         }
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]:     ]
Dec 09 16:22:58 compute-0 trusting_maxwell[244903]: }
Dec 09 16:22:58 compute-0 systemd[1]: libpod-078791d75ddc2f65a3ad95c8fe7b9ee5b482290f2392a40d316b53d76fca1f4e.scope: Deactivated successfully.
Dec 09 16:22:58 compute-0 podman[244912]: 2025-12-09 16:22:58.156942588 +0000 UTC m=+0.029999627 container died 078791d75ddc2f65a3ad95c8fe7b9ee5b482290f2392a40d316b53d76fca1f4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_maxwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8e2cb53c2902e2fa1f9680c16e075845ea4b975fe11a4331c33c33614d71be6-merged.mount: Deactivated successfully.
Dec 09 16:22:58 compute-0 podman[244912]: 2025-12-09 16:22:58.197858114 +0000 UTC m=+0.070915123 container remove 078791d75ddc2f65a3ad95c8fe7b9ee5b482290f2392a40d316b53d76fca1f4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:22:58 compute-0 systemd[1]: libpod-conmon-078791d75ddc2f65a3ad95c8fe7b9ee5b482290f2392a40d316b53d76fca1f4e.scope: Deactivated successfully.
Dec 09 16:22:58 compute-0 sudo[244811]: pam_unix(sudo:session): session closed for user root
Dec 09 16:22:58 compute-0 sudo[244927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:22:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:58 compute-0 sudo[244927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:22:58 compute-0 sudo[244927]: pam_unix(sudo:session): session closed for user root
Dec 09 16:22:58 compute-0 sudo[244952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:22:58 compute-0 sudo[244952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:22:58 compute-0 podman[244988]: 2025-12-09 16:22:58.698844779 +0000 UTC m=+0.036024628 container create 6bf1b39eefecdf64b9ee204968c936689d011418937ba7c110cdd905bd4ff699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bouman, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Dec 09 16:22:58 compute-0 systemd[1]: Started libpod-conmon-6bf1b39eefecdf64b9ee204968c936689d011418937ba7c110cdd905bd4ff699.scope.
Dec 09 16:22:58 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:22:58 compute-0 podman[244988]: 2025-12-09 16:22:58.76478935 +0000 UTC m=+0.101969229 container init 6bf1b39eefecdf64b9ee204968c936689d011418937ba7c110cdd905bd4ff699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:22:58 compute-0 podman[244988]: 2025-12-09 16:22:58.770204674 +0000 UTC m=+0.107384523 container start 6bf1b39eefecdf64b9ee204968c936689d011418937ba7c110cdd905bd4ff699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bouman, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:22:58 compute-0 podman[244988]: 2025-12-09 16:22:58.773460487 +0000 UTC m=+0.110640386 container attach 6bf1b39eefecdf64b9ee204968c936689d011418937ba7c110cdd905bd4ff699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bouman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 09 16:22:58 compute-0 eager_bouman[245004]: 167 167
Dec 09 16:22:58 compute-0 systemd[1]: libpod-6bf1b39eefecdf64b9ee204968c936689d011418937ba7c110cdd905bd4ff699.scope: Deactivated successfully.
Dec 09 16:22:58 compute-0 podman[244988]: 2025-12-09 16:22:58.775142555 +0000 UTC m=+0.112322404 container died 6bf1b39eefecdf64b9ee204968c936689d011418937ba7c110cdd905bd4ff699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:22:58 compute-0 podman[244988]: 2025-12-09 16:22:58.683115461 +0000 UTC m=+0.020295330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-721aee8339c0524de564eeeacebba432749c44b4f429c48894ad2fa8cd122c36-merged.mount: Deactivated successfully.
Dec 09 16:22:58 compute-0 podman[244988]: 2025-12-09 16:22:58.810061191 +0000 UTC m=+0.147241070 container remove 6bf1b39eefecdf64b9ee204968c936689d011418937ba7c110cdd905bd4ff699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 09 16:22:58 compute-0 systemd[1]: libpod-conmon-6bf1b39eefecdf64b9ee204968c936689d011418937ba7c110cdd905bd4ff699.scope: Deactivated successfully.
Dec 09 16:22:58 compute-0 podman[245029]: 2025-12-09 16:22:58.976510927 +0000 UTC m=+0.041234717 container create dc807fe07a58f855908fcea906cb3baf8a83ca95bbb2e6153624f0b1f103ec24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_noyce, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 09 16:22:59 compute-0 systemd[1]: Started libpod-conmon-dc807fe07a58f855908fcea906cb3baf8a83ca95bbb2e6153624f0b1f103ec24.scope.
Dec 09 16:22:59 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b28be38f9ab15a40139caedd373008f9c2cb5086b34c685fa1a75f95cbb014c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b28be38f9ab15a40139caedd373008f9c2cb5086b34c685fa1a75f95cbb014c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b28be38f9ab15a40139caedd373008f9c2cb5086b34c685fa1a75f95cbb014c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b28be38f9ab15a40139caedd373008f9c2cb5086b34c685fa1a75f95cbb014c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:22:59 compute-0 podman[245029]: 2025-12-09 16:22:59.051476945 +0000 UTC m=+0.116200745 container init dc807fe07a58f855908fcea906cb3baf8a83ca95bbb2e6153624f0b1f103ec24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_noyce, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:22:59 compute-0 podman[245029]: 2025-12-09 16:22:58.957986729 +0000 UTC m=+0.022710539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:22:59 compute-0 podman[245029]: 2025-12-09 16:22:59.05939174 +0000 UTC m=+0.124115520 container start dc807fe07a58f855908fcea906cb3baf8a83ca95bbb2e6153624f0b1f103ec24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_noyce, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 09 16:22:59 compute-0 podman[245029]: 2025-12-09 16:22:59.063333933 +0000 UTC m=+0.128057743 container attach dc807fe07a58f855908fcea906cb3baf8a83ca95bbb2e6153624f0b1f103ec24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:22:59 compute-0 ceph-mon[75222]: pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:22:59 compute-0 lvm[245124]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:22:59 compute-0 lvm[245124]: VG ceph_vg0 finished
Dec 09 16:22:59 compute-0 lvm[245125]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:22:59 compute-0 lvm[245125]: VG ceph_vg1 finished
Dec 09 16:22:59 compute-0 lvm[245127]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:22:59 compute-0 lvm[245127]: VG ceph_vg2 finished
Dec 09 16:22:59 compute-0 suspicious_noyce[245046]: {}
Dec 09 16:22:59 compute-0 systemd[1]: libpod-dc807fe07a58f855908fcea906cb3baf8a83ca95bbb2e6153624f0b1f103ec24.scope: Deactivated successfully.
Dec 09 16:22:59 compute-0 systemd[1]: libpod-dc807fe07a58f855908fcea906cb3baf8a83ca95bbb2e6153624f0b1f103ec24.scope: Consumed 1.405s CPU time.
Dec 09 16:22:59 compute-0 podman[245029]: 2025-12-09 16:22:59.910700754 +0000 UTC m=+0.975424584 container died dc807fe07a58f855908fcea906cb3baf8a83ca95bbb2e6153624f0b1f103ec24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b28be38f9ab15a40139caedd373008f9c2cb5086b34c685fa1a75f95cbb014c-merged.mount: Deactivated successfully.
Dec 09 16:22:59 compute-0 podman[245029]: 2025-12-09 16:22:59.961693558 +0000 UTC m=+1.026417348 container remove dc807fe07a58f855908fcea906cb3baf8a83ca95bbb2e6153624f0b1f103ec24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:22:59 compute-0 systemd[1]: libpod-conmon-dc807fe07a58f855908fcea906cb3baf8a83ca95bbb2e6153624f0b1f103ec24.scope: Deactivated successfully.
Dec 09 16:23:00 compute-0 sudo[244952]: pam_unix(sudo:session): session closed for user root
Dec 09 16:23:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:23:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:23:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:23:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:23:00 compute-0 sudo[245141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:23:00 compute-0 sudo[245141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:23:00 compute-0 sudo[245141]: pam_unix(sudo:session): session closed for user root
Dec 09 16:23:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:23:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:23:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:23:01 compute-0 ceph-mon[75222]: pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:03 compute-0 ceph-mon[75222]: pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:23:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3401 writes, 15K keys, 3401 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3401 writes, 3401 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1290 writes, 5854 keys, 1290 commit groups, 1.0 writes per commit group, ingest: 8.64 MB, 0.01 MB/s
                                           Interval WAL: 1290 writes, 1290 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    121.7      0.13              0.05         7    0.019       0      0       0.0       0.0
                                             L6      1/0    7.43 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6    164.6    136.0      0.31              0.14         6    0.052     24K   3188       0.0       0.0
                                            Sum      1/0    7.43 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6    115.1    131.7      0.45              0.19        13    0.034     24K   3188       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    133.5    135.7      0.27              0.11         8    0.033     17K   2463       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    164.6    136.0      0.31              0.14         6    0.052     24K   3188       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    125.1      0.13              0.05         6    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ad05ef58d0#2 capacity: 308.00 MB usage: 1.96 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(108,1.73 MB,0.562514%) FilterBlock(14,75.73 KB,0.0240128%) IndexBlock(14,153.42 KB,0.0486448%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
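The DB Stats dump above is RocksDB's periodic self-report inside ceph-mon (the 1200 s uptime and 600 s interval match a ~10-minute dump cadence). The headline figures are simple ratios; below is a minimal, self-contained sketch of that arithmetic, assuming the journald continuation lines have been collected into one string (the snippet hardcodes two of them for illustration):

import re

dump = """
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 3401 writes, 15K keys, 3401 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Block cache BinnedLRUCache@0x55ad05ef58d0#2 capacity: 308.00 MB usage: 1.96 MB
"""

# RocksDB reports throughput as cumulative ingest over cumulative uptime.
uptime = float(re.search(r"Uptime\(secs\): ([\d.]+) total", dump).group(1))
ingest_gb = float(re.search(r"ingest: ([\d.]+) GB", dump).group(1))
print(f"ingest rate: {ingest_gb * 1024 / uptime:.2f} MB/s")  # ~0.02 MB/s, as logged

# Cache utilisation: usage over capacity. The per-entry "portion" percentages
# in the entry-stats line are the same ratio taken per entry type.
cap = float(re.search(r"capacity: ([\d.]+) MB", dump).group(1))
used = float(re.search(r"usage: ([\d.]+) MB", dump).group(1))
print(f"block cache filled: {100 * used / cap:.3f}%")  # ~0.636% of 308 MB

(The occupancy value 18446744073709551615 in the original block-cache line is 2**64 - 1, i.e. apparently an underflowed counter rather than a real entry count.)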
Dec 09 16:23:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 09 16:23:04 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1957185819' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 09 16:23:04 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14346 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 09 16:23:04 compute-0 ceph-mgr[75515]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 09 16:23:04 compute-0 ceph-mgr[75515]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 09 16:23:04 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/1957185819' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 09 16:23:05 compute-0 ceph-mon[75222]: pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:05 compute-0 ceph-mon[75222]: from='client.14346 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 09 16:23:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:23:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:07 compute-0 ceph-mon[75222]: pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:07 compute-0 podman[245166]: 2025-12-09 16:23:07.543838896 +0000 UTC m=+0.075208805 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:23:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:09 compute-0 ceph-mon[75222]: pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:23:11 compute-0 ceph-mon[75222]: pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:13 compute-0 ceph-mon[75222]: pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:14 compute-0 ceph-mon[75222]: pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:23:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:17 compute-0 ceph-mon[75222]: pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:23:17.841 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:23:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:23:17.842 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:23:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:23:17.842 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:23:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:19 compute-0 ceph-mon[75222]: pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:23:21 compute-0 ceph-mon[75222]: pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:22 compute-0 podman[245188]: 2025-12-09 16:23:22.609524901 +0000 UTC m=+0.054423953 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 09 16:23:22 compute-0 podman[245187]: 2025-12-09 16:23:22.638629661 +0000 UTC m=+0.082941476 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:23:23 compute-0 ceph-mon[75222]: pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:24 compute-0 sshd-session[245229]: Invalid user odoo from 146.190.31.45 port 49614
Dec 09 16:23:24 compute-0 sshd-session[245229]: Connection closed by invalid user odoo 146.190.31.45 port 49614 [preauth]
Dec 09 16:23:25 compute-0 ceph-mon[75222]: pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:23:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:23:25
Dec 09 16:23:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:23:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:23:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.log', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root']
Dec 09 16:23:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:23:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:23:27 compute-0 ceph-mon[75222]: pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:29 compute-0 ceph-mon[75222]: pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:23:31 compute-0 ceph-mon[75222]: pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:33 compute-0 ceph-mon[75222]: pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:35 compute-0 ceph-mon[75222]: pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:23:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
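Each effective_target_ratio / pg target pair above is the same calculation: the pool's share of raw capacity times its autoscale bias times the cluster's PG budget, with the result quantized to a power of two (and left at the current value when the difference is too small to act on). The logged targets reproduce exactly with a 300-PG budget, which would correspond to the default mon_target_pg_per_osd of 100 across 3 OSDs; that OSD count is an assumption, not something this excerpt states. A sketch of the re-derivation:

# Hypothetical re-derivation of three pg_autoscaler lines above.
PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (default 100) * 3 OSDs

pools = {
    # pool: (capacity ratio "using ... of space", bias), copied from the log
    ".mgr":               (7.185749983720779e-06, 1.0),
    "cephfs.cephfs.meta": (2.0333656678172135e-06, 4.0),
    "default.rgw.log":    (4.1969867161554995e-06, 1.0),
}

for name, (ratio, bias) in pools.items():
    print(f"{name}: raw pg target = {ratio * bias * PG_BUDGET}")
# .mgr               -> 0.0021557249951162337 (quantized to 1)
# cephfs.cephfs.meta -> 0.002440038801380656  (quantized to 16, bias 4.0)
# default.rgw.log    -> 0.0012590960148466499 (quantized to 32)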
Dec 09 16:23:37 compute-0 ceph-mon[75222]: pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:38 compute-0 podman[245231]: 2025-12-09 16:23:38.652143084 +0000 UTC m=+0.089750609 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 16:23:39 compute-0 ceph-mon[75222]: pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:23:41 compute-0 ceph-mon[75222]: pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:43 compute-0 ceph-mon[75222]: pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:45 compute-0 ceph-mon[75222]: pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:23:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:47 compute-0 ceph-mon[75222]: pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:49 compute-0 ceph-mon[75222]: pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:23:51 compute-0 ceph-mon[75222]: pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:53 compute-0 ceph-mon[75222]: pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:53 compute-0 podman[245254]: 2025-12-09 16:23:53.611247948 +0000 UTC m=+0.050218143 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 09 16:23:53 compute-0 podman[245253]: 2025-12-09 16:23:53.675316985 +0000 UTC m=+0.116604816 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.218 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.218 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.266 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.266 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.266 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.288 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.288 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.288 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.289 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.289 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.289 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.290 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.290 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.290 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.315 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.316 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.316 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.316 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.316 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:23:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:23:54 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3723522233' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.844 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
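nova-compute is shelling out to ceph df here to size the RBD-backed disk pool (and it repeats the query about 0.2 s later for the placement update). The query can be reproduced with the exact command line from the log; the JSON layout sketched below ("stats" holding total/avail byte counters) is the usual ceph df --format=json shape, though field names can shift between Ceph releases:

import json
import subprocess

# Exact command as logged by oslo_concurrency.processutils above.
out = subprocess.check_output([
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
])
stats = json.loads(out)["stats"]
print(f'{stats["total_avail_bytes"] / 2**30:.1f} GiB free '
      f'of {stats["total_bytes"] / 2**30:.1f} GiB')  # ~60 of 60 GiB here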
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.975 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.976 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5137MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.976 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:23:54 compute-0 nova_compute[243452]: 2025-12-09 16:23:54.977 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:23:55 compute-0 nova_compute[243452]: 2025-12-09 16:23:55.045 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:23:55 compute-0 nova_compute[243452]: 2025-12-09 16:23:55.045 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:23:55 compute-0 nova_compute[243452]: 2025-12-09 16:23:55.064 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:23:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:23:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:23:55 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273589137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:23:55 compute-0 nova_compute[243452]: 2025-12-09 16:23:55.590 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:23:55 compute-0 nova_compute[243452]: 2025-12-09 16:23:55.594 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:23:55 compute-0 ceph-mon[75222]: pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:55 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3723522233' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:23:55 compute-0 nova_compute[243452]: 2025-12-09 16:23:55.663 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:23:55 compute-0 nova_compute[243452]: 2025-12-09 16:23:55.665 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:23:55 compute-0 nova_compute[243452]: 2025-12-09 16:23:55.665 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
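The resource_tracker / placement exchange above ends with the inventory dict the scheduler actually consumes; usable capacity per resource class is (total - reserved) * allocation_ratio. Re-doing that arithmetic on the logged values:

# Inventory exactly as logged by nova.scheduler.client.report above.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: schedulable capacity = {capacity}")
# MEMORY_MB: 7167.0  VCPU: 32.0 (8 cores, 4x oversubscription)  DISK_GB: 53.1

So this otherwise idle host (used_vcpus=0, used_disk=0GB in the final resource view) can admit up to 32 vCPUs and 53.1 GB of disk before placement stops accepting new instances.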
Dec 09 16:23:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:23:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:23:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:23:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:23:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:23:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:23:56 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4273589137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:23:57 compute-0 ceph-mon[75222]: pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:23:59 compute-0 ceph-mon[75222]: pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:00 compute-0 sudo[245342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:24:00 compute-0 sudo[245342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:24:00 compute-0 sudo[245342]: pam_unix(sudo:session): session closed for user root
Dec 09 16:24:00 compute-0 sudo[245367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 09 16:24:00 compute-0 sudo[245367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:24:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:24:00 compute-0 sudo[245367]: pam_unix(sudo:session): session closed for user root
Dec 09 16:24:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:24:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:24:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:24:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:24:00 compute-0 sudo[245411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:24:00 compute-0 sudo[245411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:24:00 compute-0 sudo[245411]: pam_unix(sudo:session): session closed for user root
Dec 09 16:24:00 compute-0 sudo[245436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:24:00 compute-0 sudo[245436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:24:01 compute-0 sudo[245436]: pam_unix(sudo:session): session closed for user root
Dec 09 16:24:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:24:01 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:24:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:24:01 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:24:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:24:01 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:24:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:24:01 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:24:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:24:01 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:24:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:24:01 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:24:01 compute-0 sudo[245492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:24:01 compute-0 sudo[245492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:24:01 compute-0 sudo[245492]: pam_unix(sudo:session): session closed for user root
Dec 09 16:24:01 compute-0 sudo[245517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:24:01 compute-0 sudo[245517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:24:01 compute-0 ceph-mon[75222]: pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:24:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:24:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:24:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:24:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:24:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:24:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:24:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:24:01 compute-0 podman[245554]: 2025-12-09 16:24:01.676267027 +0000 UTC m=+0.047626589 container create de80b12ad081ab56ad66ee8c0a8bfc813751915e736e9e9d04863eedbc7ee2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:24:01 compute-0 systemd[1]: Started libpod-conmon-de80b12ad081ab56ad66ee8c0a8bfc813751915e736e9e9d04863eedbc7ee2a7.scope.
Dec 09 16:24:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:24:01 compute-0 podman[245554]: 2025-12-09 16:24:01.650526803 +0000 UTC m=+0.021886385 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:24:01 compute-0 podman[245554]: 2025-12-09 16:24:01.760777877 +0000 UTC m=+0.132137459 container init de80b12ad081ab56ad66ee8c0a8bfc813751915e736e9e9d04863eedbc7ee2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_carson, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:24:01 compute-0 podman[245554]: 2025-12-09 16:24:01.767601891 +0000 UTC m=+0.138961463 container start de80b12ad081ab56ad66ee8c0a8bfc813751915e736e9e9d04863eedbc7ee2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:24:01 compute-0 podman[245554]: 2025-12-09 16:24:01.77141601 +0000 UTC m=+0.142775602 container attach de80b12ad081ab56ad66ee8c0a8bfc813751915e736e9e9d04863eedbc7ee2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_carson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 09 16:24:01 compute-0 musing_carson[245570]: 167 167
Dec 09 16:24:01 compute-0 systemd[1]: libpod-de80b12ad081ab56ad66ee8c0a8bfc813751915e736e9e9d04863eedbc7ee2a7.scope: Deactivated successfully.
Dec 09 16:24:01 compute-0 podman[245554]: 2025-12-09 16:24:01.774134397 +0000 UTC m=+0.145493969 container died de80b12ad081ab56ad66ee8c0a8bfc813751915e736e9e9d04863eedbc7ee2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_carson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:24:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-be0e552db1229f0678cdcec09457861f69bf62308fd28698f00aec90b3baf548-merged.mount: Deactivated successfully.
Dec 09 16:24:01 compute-0 podman[245554]: 2025-12-09 16:24:01.811466422 +0000 UTC m=+0.182825984 container remove de80b12ad081ab56ad66ee8c0a8bfc813751915e736e9e9d04863eedbc7ee2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_carson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:24:01 compute-0 systemd[1]: libpod-conmon-de80b12ad081ab56ad66ee8c0a8bfc813751915e736e9e9d04863eedbc7ee2a7.scope: Deactivated successfully.
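
The create → init → start → attach → died → remove sequence above is one complete cephadm probe: a throwaway container (musing_carson here; hardcore_boyd, distracted_kapitsa, confident_rosalind, cool_greider and unruffled_rubin below follow the same pattern) runs against the Ceph image and is removed seconds later. The "167 167" on its stdout is evidently a uid/gid probe; 167 is the ceph user and group id inside Ceph container images. A minimal sketch for measuring how long each such probe lives, by pairing podman "container create"/"container remove" journal events; __REALTIME_TIMESTAMP and MESSAGE are standard journald JSON fields, while the message layout is an assumption inferred from the entries above:

    #!/usr/bin/env python3
    # Sketch: pair podman "container create" / "container remove" journal
    # events to report how long each cephadm probe container lived.
    import json
    import re
    import subprocess

    EVENT_RE = re.compile(r" container (create|remove) ([0-9a-f]{64}) ")

    def probe_lifetimes():
        out = subprocess.run(
            ["journalctl", "-o", "json", "--no-pager", "_COMM=podman"],
            capture_output=True, text=True, check=True,
        ).stdout
        created = {}
        for line in out.splitlines():
            entry = json.loads(line)
            msg = entry.get("MESSAGE")
            if not isinstance(msg, str):
                continue                      # binary payloads come as arrays
            m = EVENT_RE.search(msg)
            if not m:
                continue                      # skip init/start/attach/died
            event, cid = m.groups()
            usec = int(entry["__REALTIME_TIMESTAMP"])  # microseconds
            if event == "create":
                created[cid] = usec
            elif cid in created:
                yield cid[:12], (usec - created.pop(cid)) / 1e6

    if __name__ == "__main__":
        for cid, seconds in probe_lifetimes():
            print(f"{cid}  lived {seconds:.3f}s")

For the de80b12ad081 container above this would report roughly 0.14s between create (16:24:01.676) and remove (16:24:01.811).
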
Dec 09 16:24:02 compute-0 podman[245594]: 2025-12-09 16:24:02.003830447 +0000 UTC m=+0.055207625 container create 5ed7ddc496f5a2df89e4e47d5b3ce0c5c706c92712ce8d72ec377626b8d98aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_boyd, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:24:02 compute-0 systemd[1]: Started libpod-conmon-5ed7ddc496f5a2df89e4e47d5b3ce0c5c706c92712ce8d72ec377626b8d98aa0.scope.
Dec 09 16:24:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:24:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69556281d9948d6ab4bba23afbcdcef10e795c5affb2224838ed6e5669882c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69556281d9948d6ab4bba23afbcdcef10e795c5affb2224838ed6e5669882c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69556281d9948d6ab4bba23afbcdcef10e795c5affb2224838ed6e5669882c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69556281d9948d6ab4bba23afbcdcef10e795c5affb2224838ed6e5669882c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69556281d9948d6ab4bba23afbcdcef10e795c5affb2224838ed6e5669882c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:02 compute-0 podman[245594]: 2025-12-09 16:24:01.981084578 +0000 UTC m=+0.032461816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:24:02 compute-0 podman[245594]: 2025-12-09 16:24:02.079102924 +0000 UTC m=+0.130480142 container init 5ed7ddc496f5a2df89e4e47d5b3ce0c5c706c92712ce8d72ec377626b8d98aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_boyd, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:24:02 compute-0 podman[245594]: 2025-12-09 16:24:02.086051632 +0000 UTC m=+0.137428830 container start 5ed7ddc496f5a2df89e4e47d5b3ce0c5c706c92712ce8d72ec377626b8d98aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_boyd, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 09 16:24:02 compute-0 podman[245594]: 2025-12-09 16:24:02.089474379 +0000 UTC m=+0.140851587 container attach 5ed7ddc496f5a2df89e4e47d5b3ce0c5c706c92712ce8d72ec377626b8d98aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_boyd, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:24:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:02 compute-0 hardcore_boyd[245612]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:24:02 compute-0 hardcore_boyd[245612]: --> All data devices are unavailable
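
"--> passed data devices: 0 physical, 3 LVM" followed by "--> All data devices are unavailable" is ceph-volume evaluating the drive group: the three LVs offered to it are already prepared as OSDs, so there is nothing left to create. The lvm list report printed by confident_rosalind further down confirms this (OSDs 0, 1 and 2 on ceph_vg0-2). A minimal sketch that re-runs the exact listing command from the sudo entry at 16:24:02 below and prints which LVs are already consumed; it assumes root, the cephadm payload path and fsid copied verbatim from this log, and that the JSON report is the only stdout (as here):

    #!/usr/bin/env python3
    # Sketch: re-run cephadm's `ceph-volume lvm list --format json` (command
    # line copied from the sudo log entry below) and show which LVs already
    # carry OSD tags -- i.e. why "All data devices are unavailable".
    import json
    import subprocess

    FSID = "67f67f44-54fc-54ea-8df0-10931b6ecdaf"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    for osd_id, entries in json.loads(out).items():
        for e in entries:
            print(f"osd.{osd_id} already on {e['lv_path']} "
                  f"(osd_fsid {e['tags']['ceph.osd_fsid']})")
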
Dec 09 16:24:02 compute-0 systemd[1]: libpod-5ed7ddc496f5a2df89e4e47d5b3ce0c5c706c92712ce8d72ec377626b8d98aa0.scope: Deactivated successfully.
Dec 09 16:24:02 compute-0 podman[245594]: 2025-12-09 16:24:02.590421824 +0000 UTC m=+0.641799012 container died 5ed7ddc496f5a2df89e4e47d5b3ce0c5c706c92712ce8d72ec377626b8d98aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:24:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a69556281d9948d6ab4bba23afbcdcef10e795c5affb2224838ed6e5669882c4-merged.mount: Deactivated successfully.
Dec 09 16:24:02 compute-0 podman[245594]: 2025-12-09 16:24:02.630328991 +0000 UTC m=+0.681706179 container remove 5ed7ddc496f5a2df89e4e47d5b3ce0c5c706c92712ce8d72ec377626b8d98aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 09 16:24:02 compute-0 systemd[1]: libpod-conmon-5ed7ddc496f5a2df89e4e47d5b3ce0c5c706c92712ce8d72ec377626b8d98aa0.scope: Deactivated successfully.
Dec 09 16:24:02 compute-0 sudo[245517]: pam_unix(sudo:session): session closed for user root
Dec 09 16:24:02 compute-0 sudo[245644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:24:02 compute-0 sudo[245644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:24:02 compute-0 sudo[245644]: pam_unix(sudo:session): session closed for user root
Dec 09 16:24:02 compute-0 sudo[245669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:24:02 compute-0 sudo[245669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:24:03 compute-0 podman[245707]: 2025-12-09 16:24:03.0918161 +0000 UTC m=+0.047878006 container create b83560c03a32d365de28d157fd5a2010c59dd22ffb76dce1ae9bb166912f0515 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:24:03 compute-0 systemd[1]: Started libpod-conmon-b83560c03a32d365de28d157fd5a2010c59dd22ffb76dce1ae9bb166912f0515.scope.
Dec 09 16:24:03 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:24:03 compute-0 podman[245707]: 2025-12-09 16:24:03.166193461 +0000 UTC m=+0.122255367 container init b83560c03a32d365de28d157fd5a2010c59dd22ffb76dce1ae9bb166912f0515 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_kapitsa, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:24:03 compute-0 podman[245707]: 2025-12-09 16:24:03.074050454 +0000 UTC m=+0.030112360 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:24:03 compute-0 podman[245707]: 2025-12-09 16:24:03.17528452 +0000 UTC m=+0.131346406 container start b83560c03a32d365de28d157fd5a2010c59dd22ffb76dce1ae9bb166912f0515 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 16:24:03 compute-0 podman[245707]: 2025-12-09 16:24:03.17912482 +0000 UTC m=+0.135186706 container attach b83560c03a32d365de28d157fd5a2010c59dd22ffb76dce1ae9bb166912f0515 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 16:24:03 compute-0 distracted_kapitsa[245723]: 167 167
Dec 09 16:24:03 compute-0 systemd[1]: libpod-b83560c03a32d365de28d157fd5a2010c59dd22ffb76dce1ae9bb166912f0515.scope: Deactivated successfully.
Dec 09 16:24:03 compute-0 podman[245707]: 2025-12-09 16:24:03.182920338 +0000 UTC m=+0.138982224 container died b83560c03a32d365de28d157fd5a2010c59dd22ffb76dce1ae9bb166912f0515 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_kapitsa, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:24:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a25355d83559ce7e4340264423891c8b6a577298ae79974372e647a30a35de95-merged.mount: Deactivated successfully.
Dec 09 16:24:03 compute-0 podman[245707]: 2025-12-09 16:24:03.217582787 +0000 UTC m=+0.173644703 container remove b83560c03a32d365de28d157fd5a2010c59dd22ffb76dce1ae9bb166912f0515 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 09 16:24:03 compute-0 systemd[1]: libpod-conmon-b83560c03a32d365de28d157fd5a2010c59dd22ffb76dce1ae9bb166912f0515.scope: Deactivated successfully.
Dec 09 16:24:03 compute-0 podman[245747]: 2025-12-09 16:24:03.404991821 +0000 UTC m=+0.045559931 container create 7a2e432d623d22a1dcf53a24b7023196b36936c86746e958d22993586ed31f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:24:03 compute-0 systemd[1]: Started libpod-conmon-7a2e432d623d22a1dcf53a24b7023196b36936c86746e958d22993586ed31f95.scope.
Dec 09 16:24:03 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d607043aa7d0737f664d1363fc8f7501a79347d919063837098eaa31ae99108a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d607043aa7d0737f664d1363fc8f7501a79347d919063837098eaa31ae99108a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d607043aa7d0737f664d1363fc8f7501a79347d919063837098eaa31ae99108a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d607043aa7d0737f664d1363fc8f7501a79347d919063837098eaa31ae99108a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:03 compute-0 podman[245747]: 2025-12-09 16:24:03.383344253 +0000 UTC m=+0.023912443 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:24:03 compute-0 podman[245747]: 2025-12-09 16:24:03.48422101 +0000 UTC m=+0.124789200 container init 7a2e432d623d22a1dcf53a24b7023196b36936c86746e958d22993586ed31f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:24:03 compute-0 podman[245747]: 2025-12-09 16:24:03.493507565 +0000 UTC m=+0.134075655 container start 7a2e432d623d22a1dcf53a24b7023196b36936c86746e958d22993586ed31f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:24:03 compute-0 podman[245747]: 2025-12-09 16:24:03.497044985 +0000 UTC m=+0.137613165 container attach 7a2e432d623d22a1dcf53a24b7023196b36936c86746e958d22993586ed31f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 09 16:24:03 compute-0 ceph-mon[75222]: pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:03 compute-0 confident_rosalind[245764]: {
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:     "0": [
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:         {
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "devices": [
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "/dev/loop3"
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             ],
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_name": "ceph_lv0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_size": "21470642176",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "name": "ceph_lv0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "tags": {
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.cluster_name": "ceph",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.crush_device_class": "",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.encrypted": "0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.objectstore": "bluestore",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.osd_id": "0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.type": "block",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.vdo": "0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.with_tpm": "0"
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             },
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "type": "block",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "vg_name": "ceph_vg0"
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:         }
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:     ],
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:     "1": [
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:         {
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "devices": [
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "/dev/loop4"
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             ],
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_name": "ceph_lv1",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_size": "21470642176",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "name": "ceph_lv1",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "tags": {
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.cluster_name": "ceph",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.crush_device_class": "",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.encrypted": "0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.objectstore": "bluestore",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.osd_id": "1",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.type": "block",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.vdo": "0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.with_tpm": "0"
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             },
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "type": "block",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "vg_name": "ceph_vg1"
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:         }
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:     ],
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:     "2": [
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:         {
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "devices": [
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "/dev/loop5"
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             ],
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_name": "ceph_lv2",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_size": "21470642176",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "name": "ceph_lv2",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "tags": {
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.cluster_name": "ceph",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.crush_device_class": "",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.encrypted": "0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.objectstore": "bluestore",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.osd_id": "2",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.type": "block",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.vdo": "0",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:                 "ceph.with_tpm": "0"
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             },
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "type": "block",
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:             "vg_name": "ceph_vg2"
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:         }
Dec 09 16:24:03 compute-0 confident_rosalind[245764]:     ]
Dec 09 16:24:03 compute-0 confident_rosalind[245764]: }
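
The JSON block above (stdout of the confident_rosalind container) is the `ceph-volume lvm list --format json` report requested at 16:24:02: a dict keyed by OSD id, one entry per LV with its backing devices, LV path and size, and the ceph.* metadata twice over, once as the comma-joined lv_tags string and once as the parsed tags dict. A minimal parsing sketch, assuming the report has been captured to a file (the filename is illustrative); splitting lv_tags on commas only works because none of the tag values here contain one:

    #!/usr/bin/env python3
    # Sketch: summarize a `ceph-volume lvm list --format json` report
    # (structure exactly as printed above) and cross-check that the
    # comma-joined `lv_tags` string agrees with the parsed `tags` dict.
    import json

    def parse_lv_tags(s):
        # "ceph.osd_id=0,ceph.type=block,..." -> {"ceph.osd_id": "0", ...}
        return dict(kv.split("=", 1) for kv in s.split(",") if kv)

    with open("lvm_list.json") as f:          # illustrative capture file
        report = json.load(f)

    for osd_id, entries in sorted(report.items(), key=lambda kv: int(kv[0])):
        for e in entries:
            assert parse_lv_tags(e["lv_tags"]) == e["tags"], "tag views differ"
            size_gib = int(e["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {e['lv_path']} on "
                  f"{','.join(e['devices'])} ({size_gib:.0f} GiB)")

Against the data above this prints three lines, e.g. "osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (20 GiB)" (21470642176 bytes is just short of 20 GiB).
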
Dec 09 16:24:03 compute-0 systemd[1]: libpod-7a2e432d623d22a1dcf53a24b7023196b36936c86746e958d22993586ed31f95.scope: Deactivated successfully.
Dec 09 16:24:03 compute-0 podman[245747]: 2025-12-09 16:24:03.812438248 +0000 UTC m=+0.453006418 container died 7a2e432d623d22a1dcf53a24b7023196b36936c86746e958d22993586ed31f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:24:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-d607043aa7d0737f664d1363fc8f7501a79347d919063837098eaa31ae99108a-merged.mount: Deactivated successfully.
Dec 09 16:24:03 compute-0 podman[245747]: 2025-12-09 16:24:03.856451763 +0000 UTC m=+0.497019863 container remove 7a2e432d623d22a1dcf53a24b7023196b36936c86746e958d22993586ed31f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:24:03 compute-0 systemd[1]: libpod-conmon-7a2e432d623d22a1dcf53a24b7023196b36936c86746e958d22993586ed31f95.scope: Deactivated successfully.
Dec 09 16:24:03 compute-0 sudo[245669]: pam_unix(sudo:session): session closed for user root
Dec 09 16:24:03 compute-0 sudo[245783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:24:03 compute-0 sudo[245783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:24:03 compute-0 sudo[245783]: pam_unix(sudo:session): session closed for user root
Dec 09 16:24:04 compute-0 sudo[245808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:24:04 compute-0 sudo[245808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:24:04 compute-0 podman[245846]: 2025-12-09 16:24:04.328647947 +0000 UTC m=+0.044980733 container create 02a91f6286d5a31a06a164a1e2d9efc026826f450813f3bedf574936107dd81e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:24:04 compute-0 systemd[1]: Started libpod-conmon-02a91f6286d5a31a06a164a1e2d9efc026826f450813f3bedf574936107dd81e.scope.
Dec 09 16:24:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:04 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:24:04 compute-0 podman[245846]: 2025-12-09 16:24:04.30910307 +0000 UTC m=+0.025435906 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:24:04 compute-0 podman[245846]: 2025-12-09 16:24:04.41993729 +0000 UTC m=+0.136270126 container init 02a91f6286d5a31a06a164a1e2d9efc026826f450813f3bedf574936107dd81e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_greider, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:24:04 compute-0 podman[245846]: 2025-12-09 16:24:04.43150545 +0000 UTC m=+0.147838196 container start 02a91f6286d5a31a06a164a1e2d9efc026826f450813f3bedf574936107dd81e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:24:04 compute-0 podman[245846]: 2025-12-09 16:24:04.435325459 +0000 UTC m=+0.151658245 container attach 02a91f6286d5a31a06a164a1e2d9efc026826f450813f3bedf574936107dd81e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:24:04 compute-0 cool_greider[245862]: 167 167
Dec 09 16:24:04 compute-0 systemd[1]: libpod-02a91f6286d5a31a06a164a1e2d9efc026826f450813f3bedf574936107dd81e.scope: Deactivated successfully.
Dec 09 16:24:04 compute-0 podman[245846]: 2025-12-09 16:24:04.438119869 +0000 UTC m=+0.154452615 container died 02a91f6286d5a31a06a164a1e2d9efc026826f450813f3bedf574936107dd81e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b149229ca5d895466455d580f36c633ac5a25bc94521f0068d18e139f4cba5f-merged.mount: Deactivated successfully.
Dec 09 16:24:04 compute-0 podman[245846]: 2025-12-09 16:24:04.479289223 +0000 UTC m=+0.195621969 container remove 02a91f6286d5a31a06a164a1e2d9efc026826f450813f3bedf574936107dd81e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_greider, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 09 16:24:04 compute-0 systemd[1]: libpod-conmon-02a91f6286d5a31a06a164a1e2d9efc026826f450813f3bedf574936107dd81e.scope: Deactivated successfully.
Dec 09 16:24:04 compute-0 podman[245887]: 2025-12-09 16:24:04.628559879 +0000 UTC m=+0.042006639 container create aa1b11e4913fb72a262a8c7a62fc7aff77fda25ef9c27079390d895ccf4bcf75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:24:04 compute-0 systemd[1]: Started libpod-conmon-aa1b11e4913fb72a262a8c7a62fc7aff77fda25ef9c27079390d895ccf4bcf75.scope.
Dec 09 16:24:04 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6baca3d168db196513337cd16d67b9b7d5ab70a36aa87f91b04492ae1d16c9fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6baca3d168db196513337cd16d67b9b7d5ab70a36aa87f91b04492ae1d16c9fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6baca3d168db196513337cd16d67b9b7d5ab70a36aa87f91b04492ae1d16c9fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6baca3d168db196513337cd16d67b9b7d5ab70a36aa87f91b04492ae1d16c9fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:24:04 compute-0 podman[245887]: 2025-12-09 16:24:04.609227988 +0000 UTC m=+0.022674828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:24:04 compute-0 podman[245887]: 2025-12-09 16:24:04.704085713 +0000 UTC m=+0.117532473 container init aa1b11e4913fb72a262a8c7a62fc7aff77fda25ef9c27079390d895ccf4bcf75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rubin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 09 16:24:04 compute-0 podman[245887]: 2025-12-09 16:24:04.713158621 +0000 UTC m=+0.126605381 container start aa1b11e4913fb72a262a8c7a62fc7aff77fda25ef9c27079390d895ccf4bcf75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:24:04 compute-0 podman[245887]: 2025-12-09 16:24:04.716570749 +0000 UTC m=+0.130017529 container attach aa1b11e4913fb72a262a8c7a62fc7aff77fda25ef9c27079390d895ccf4bcf75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Dec 09 16:24:05 compute-0 lvm[245981]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:24:05 compute-0 lvm[245982]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:24:05 compute-0 lvm[245981]: VG ceph_vg0 finished
Dec 09 16:24:05 compute-0 lvm[245982]: VG ceph_vg1 finished
Dec 09 16:24:05 compute-0 lvm[245984]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:24:05 compute-0 lvm[245984]: VG ceph_vg2 finished
Dec 09 16:24:05 compute-0 unruffled_rubin[245903]: {}
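
The bare "{}" from unruffled_rubin is the answer to the `ceph-volume raw list --format json` call issued at 16:24:04: no raw-mode (non-LVM) OSDs exist on this host, consistent with all three OSDs being LVM-backed. The interleaved lvm[] lines ("PV /dev/loop3 online, VG ceph_vg0 is complete") are LVM's event-driven autoactivation noticing the loop-device PVs during the scan, not errors. A rough sketch merging the two scans into one per-OSD view; the raw-list schema comment is an assumption (the report is empty here, so it is untested against this log):

    #!/usr/bin/env python3
    # Sketch: merge cephadm's two device scans -- `ceph-volume lvm list`
    # (OSDs 0-2 in this log) and `ceph-volume raw list` ({} here) -- into
    # a single osd_id -> backend map. Capture files are illustrative.
    import json

    def load(path):
        with open(path) as f:
            return json.load(f)

    lvm = load("lvm_list.json")   # {"0": [...], "1": [...], "2": [...]}
    raw = load("raw_list.json")   # {} when no raw-mode OSDs exist

    osds = {osd_id: "lvm" for osd_id in lvm}
    # Assumption: raw list is keyed by OSD uuid, entries carry an "osd_id".
    for entry in raw.values():
        osds.setdefault(str(entry.get("osd_id")), "raw")

    for osd_id in sorted(osds, key=int):
        print(f"osd.{osd_id}: {osds[osd_id]}")
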
Dec 09 16:24:05 compute-0 systemd[1]: libpod-aa1b11e4913fb72a262a8c7a62fc7aff77fda25ef9c27079390d895ccf4bcf75.scope: Deactivated successfully.
Dec 09 16:24:05 compute-0 systemd[1]: libpod-aa1b11e4913fb72a262a8c7a62fc7aff77fda25ef9c27079390d895ccf4bcf75.scope: Consumed 1.291s CPU time.
Dec 09 16:24:05 compute-0 podman[245887]: 2025-12-09 16:24:05.534619695 +0000 UTC m=+0.948066465 container died aa1b11e4913fb72a262a8c7a62fc7aff77fda25ef9c27079390d895ccf4bcf75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 09 16:24:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-6baca3d168db196513337cd16d67b9b7d5ab70a36aa87f91b04492ae1d16c9fa-merged.mount: Deactivated successfully.
Dec 09 16:24:05 compute-0 podman[245887]: 2025-12-09 16:24:05.57970453 +0000 UTC m=+0.993151290 container remove aa1b11e4913fb72a262a8c7a62fc7aff77fda25ef9c27079390d895ccf4bcf75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rubin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 09 16:24:05 compute-0 systemd[1]: libpod-conmon-aa1b11e4913fb72a262a8c7a62fc7aff77fda25ef9c27079390d895ccf4bcf75.scope: Deactivated successfully.
Dec 09 16:24:05 compute-0 ceph-mon[75222]: pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:05 compute-0 sudo[245808]: pam_unix(sudo:session): session closed for user root
Dec 09 16:24:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:24:05 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:24:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:24:05 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:24:05 compute-0 sudo[245998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:24:05 compute-0 sudo[245998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:24:05 compute-0 sudo[245998]: pam_unix(sudo:session): session closed for user root
Dec 09 16:24:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:24:06 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:24:07 compute-0 sshd-session[246023]: Invalid user test from 146.190.31.45 port 49088
Dec 09 16:24:07 compute-0 sshd-session[246023]: Connection closed by invalid user test 146.190.31.45 port 49088 [preauth]
Dec 09 16:24:07 compute-0 ceph-mon[75222]: pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:09 compute-0 podman[246025]: 2025-12-09 16:24:09.634330716 +0000 UTC m=+0.077169882 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
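
The multipathd health_status=healthy event belongs to the co-located OpenStack EDPM services rather than to Ceph. Note that its config_data label is a Python dict literal (single quotes, bare True), not JSON, so json.loads on the raw label fails. A minimal sketch, assuming podman is available on the host, that pulls the label with the standard `podman inspect` Go-template syntax and parses it with ast.literal_eval:

    #!/usr/bin/env python3
    # Sketch: read the multipathd container's `config_data` label (its
    # content is visible in the health_status journal line above) and
    # parse it -- it is a Python dict literal, not JSON.
    import ast
    import subprocess

    label = subprocess.run(
        ["podman", "inspect", "--format",
         '{{ index .Config.Labels "config_data" }}', "multipathd"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    config = ast.literal_eval(label)
    print("image:      ", config["image"])
    print("healthcheck:", config["healthcheck"]["test"])
    print("volumes:    ", len(config["volumes"]))
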
Dec 09 16:24:09 compute-0 ceph-mon[75222]: pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:24:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2385269117' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:24:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:24:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2385269117' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:24:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:24:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/2385269117' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:24:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/2385269117' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:24:11 compute-0 ceph-mon[75222]: pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:24:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5886 writes, 24K keys, 5886 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5886 writes, 1030 syncs, 5.71 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:24:13 compute-0 ceph-mon[75222]: pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:24:15 compute-0 ceph-mon[75222]: pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:24:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 7131 writes, 29K keys, 7131 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7131 writes, 1433 syncs, 4.98 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:24:17 compute-0 ceph-mon[75222]: pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:24:17.841 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:24:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:24:17.842 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:24:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:24:17.842 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:24:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:19 compute-0 ceph-mon[75222]: pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:24:21 compute-0 ceph-mon[75222]: pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:22 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:24:22 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5728 writes, 24K keys, 5728 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5728 writes, 934 syncs, 6.13 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:24:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:23 compute-0 ceph-mon[75222]: pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:24 compute-0 ceph-mgr[75515]: [devicehealth INFO root] Check health
Dec 09 16:24:24 compute-0 podman[246048]: 2025-12-09 16:24:24.653178202 +0000 UTC m=+0.081718251 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec 09 16:24:24 compute-0 podman[246047]: 2025-12-09 16:24:24.74076047 +0000 UTC m=+0.173309833 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec 09 16:24:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:24:25 compute-0 ceph-mon[75222]: pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:24:25
Dec 09 16:24:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:24:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:24:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['.rgw.root', 'vms', '.mgr', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups']
Dec 09 16:24:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:24:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:24:26 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:24:26.868 155091 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:96:e5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:69:89:7d:fc:2e'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 16:24:26 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:24:26.868 155091 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 09 16:24:26 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:24:26.870 155091 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=037f0e18-4bfd-4487-a7a8-05ae973391a9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 16:24:27 compute-0 ceph-mon[75222]: pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:29 compute-0 ceph-mon[75222]: pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:24:31 compute-0 ceph-mon[75222]: pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:33 compute-0 ceph-mon[75222]: pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:24:35 compute-0 ceph-mon[75222]: pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:24:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:24:37 compute-0 ceph-mon[75222]: pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:39 compute-0 ceph-mon[75222]: pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:24:40 compute-0 podman[246091]: 2025-12-09 16:24:40.64582077 +0000 UTC m=+0.082465282 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:24:41 compute-0 ceph-mon[75222]: pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:43 compute-0 ceph-mon[75222]: pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:45 compute-0 ceph-mon[75222]: pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:24:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:47 compute-0 ceph-mon[75222]: pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.473527) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297488473584, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1242, "num_deletes": 251, "total_data_size": 1921938, "memory_usage": 1953744, "flush_reason": "Manual Compaction"}
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297488487070, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1882070, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15109, "largest_seqno": 16350, "table_properties": {"data_size": 1876194, "index_size": 3207, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12351, "raw_average_key_size": 19, "raw_value_size": 1864378, "raw_average_value_size": 2959, "num_data_blocks": 147, "num_entries": 630, "num_filter_entries": 630, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765297365, "oldest_key_time": 1765297365, "file_creation_time": 1765297488, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 13589 microseconds, and 4314 cpu microseconds.
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.487122) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1882070 bytes OK
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.487143) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.488880) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.488895) EVENT_LOG_v1 {"time_micros": 1765297488488891, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.488914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1916305, prev total WAL file size 1916305, number of live WAL files 2.
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.489616) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1837KB)], [35(7609KB)]
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297488489672, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9673740, "oldest_snapshot_seqno": -1}
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4017 keys, 7863337 bytes, temperature: kUnknown
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297488543355, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7863337, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7834203, "index_size": 18016, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 98170, "raw_average_key_size": 24, "raw_value_size": 7759199, "raw_average_value_size": 1931, "num_data_blocks": 760, "num_entries": 4017, "num_filter_entries": 4017, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765297488, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.544104) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7863337 bytes
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.545407) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.1 rd, 145.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.4 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(9.3) write-amplify(4.2) OK, records in: 4531, records dropped: 514 output_compression: NoCompression
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.545451) EVENT_LOG_v1 {"time_micros": 1765297488545433, "job": 16, "event": "compaction_finished", "compaction_time_micros": 54025, "compaction_time_cpu_micros": 15859, "output_level": 6, "num_output_files": 1, "total_output_size": 7863337, "num_input_records": 4531, "num_output_records": 4017, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297488546370, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297488549397, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.489501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.549520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.549526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.549528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.549531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:24:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:24:48.549533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:24:49 compute-0 sshd-session[246111]: Invalid user test from 146.190.31.45 port 39360
Dec 09 16:24:49 compute-0 sshd-session[246111]: Connection closed by invalid user test 146.190.31.45 port 39360 [preauth]
Dec 09 16:24:49 compute-0 ceph-mon[75222]: pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:24:51 compute-0 ceph-mon[75222]: pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:53 compute-0 ceph-mon[75222]: pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:55 compute-0 ceph-mon[75222]: pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:24:55 compute-0 podman[246114]: 2025-12-09 16:24:55.607841042 +0000 UTC m=+0.048852722 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:24:55 compute-0 podman[246113]: 2025-12-09 16:24:55.65946112 +0000 UTC m=+0.103936518 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.667 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.667 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.667 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.668 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.682 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.682 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.682 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.683 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.683 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.683 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.683 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.707 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.707 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.707 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.707 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:24:55 compute-0 nova_compute[243452]: 2025-12-09 16:24:55.707 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:24:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:24:56 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2881715799' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:24:56 compute-0 nova_compute[243452]: 2025-12-09 16:24:56.233 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:24:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:56 compute-0 nova_compute[243452]: 2025-12-09 16:24:56.425 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:24:56 compute-0 nova_compute[243452]: 2025-12-09 16:24:56.426 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:24:56 compute-0 nova_compute[243452]: 2025-12-09 16:24:56.427 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:24:56 compute-0 nova_compute[243452]: 2025-12-09 16:24:56.427 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:24:56 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2881715799' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:24:56 compute-0 nova_compute[243452]: 2025-12-09 16:24:56.515 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:24:56 compute-0 nova_compute[243452]: 2025-12-09 16:24:56.516 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:24:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:24:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:24:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:24:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:24:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:24:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:24:56 compute-0 nova_compute[243452]: 2025-12-09 16:24:56.533 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:24:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:24:57 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481363827' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:24:57 compute-0 nova_compute[243452]: 2025-12-09 16:24:57.133 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:24:57 compute-0 nova_compute[243452]: 2025-12-09 16:24:57.139 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:24:57 compute-0 nova_compute[243452]: 2025-12-09 16:24:57.159 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:24:57 compute-0 nova_compute[243452]: 2025-12-09 16:24:57.160 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:24:57 compute-0 nova_compute[243452]: 2025-12-09 16:24:57.161 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:24:57 compute-0 nova_compute[243452]: 2025-12-09 16:24:57.532 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:24:57 compute-0 nova_compute[243452]: 2025-12-09 16:24:57.533 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:24:57 compute-0 ceph-mon[75222]: pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:57 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1481363827' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:24:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:24:59 compute-0 ceph-mon[75222]: pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:25:01 compute-0 ceph-mon[75222]: pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:03 compute-0 ceph-mon[75222]: pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:25:05 compute-0 sudo[246200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:25:05 compute-0 sudo[246200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:25:05 compute-0 sudo[246200]: pam_unix(sudo:session): session closed for user root
Dec 09 16:25:05 compute-0 sudo[246225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:25:05 compute-0 sudo[246225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:25:05 compute-0 ceph-mon[75222]: pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:06 compute-0 sudo[246225]: pam_unix(sudo:session): session closed for user root
Dec 09 16:25:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:25:06 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:25:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:25:06 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:25:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:25:06 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:25:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:25:06 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:25:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:25:06 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:25:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:25:06 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:25:06 compute-0 sudo[246280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:25:06 compute-0 sudo[246280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:25:06 compute-0 sudo[246280]: pam_unix(sudo:session): session closed for user root
Dec 09 16:25:06 compute-0 sudo[246305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:25:06 compute-0 sudo[246305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:25:07 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:25:07 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:25:07 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:25:07 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:25:07 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:25:07 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:25:07 compute-0 podman[246342]: 2025-12-09 16:25:07.27860544 +0000 UTC m=+0.088318787 container create 42bf96e747f0c08eb41ae7e3e55b21f6517898904bb7b254250bb76cae3adc51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shockley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 16:25:07 compute-0 podman[246342]: 2025-12-09 16:25:07.219957323 +0000 UTC m=+0.029670720 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:25:07 compute-0 systemd[1]: Started libpod-conmon-42bf96e747f0c08eb41ae7e3e55b21f6517898904bb7b254250bb76cae3adc51.scope.
Dec 09 16:25:07 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:25:07 compute-0 podman[246342]: 2025-12-09 16:25:07.432469888 +0000 UTC m=+0.242183225 container init 42bf96e747f0c08eb41ae7e3e55b21f6517898904bb7b254250bb76cae3adc51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shockley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:25:07 compute-0 podman[246342]: 2025-12-09 16:25:07.443552321 +0000 UTC m=+0.253265668 container start 42bf96e747f0c08eb41ae7e3e55b21f6517898904bb7b254250bb76cae3adc51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 09 16:25:07 compute-0 trusting_shockley[246358]: 167 167
Dec 09 16:25:07 compute-0 systemd[1]: libpod-42bf96e747f0c08eb41ae7e3e55b21f6517898904bb7b254250bb76cae3adc51.scope: Deactivated successfully.
Dec 09 16:25:07 compute-0 conmon[246358]: conmon 42bf96e747f0c08eb41a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42bf96e747f0c08eb41ae7e3e55b21f6517898904bb7b254250bb76cae3adc51.scope/container/memory.events
Dec 09 16:25:07 compute-0 podman[246342]: 2025-12-09 16:25:07.449355045 +0000 UTC m=+0.259068392 container attach 42bf96e747f0c08eb41ae7e3e55b21f6517898904bb7b254250bb76cae3adc51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 16:25:07 compute-0 podman[246342]: 2025-12-09 16:25:07.450528068 +0000 UTC m=+0.260241475 container died 42bf96e747f0c08eb41ae7e3e55b21f6517898904bb7b254250bb76cae3adc51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-99f98583341af84013dd0f3c1744f6c05c625819fdd92555578eea7ebfbaaf20-merged.mount: Deactivated successfully.
Dec 09 16:25:07 compute-0 podman[246342]: 2025-12-09 16:25:07.504239826 +0000 UTC m=+0.313953133 container remove 42bf96e747f0c08eb41ae7e3e55b21f6517898904bb7b254250bb76cae3adc51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:25:07 compute-0 systemd[1]: libpod-conmon-42bf96e747f0c08eb41ae7e3e55b21f6517898904bb7b254250bb76cae3adc51.scope: Deactivated successfully.
Dec 09 16:25:07 compute-0 podman[246380]: 2025-12-09 16:25:07.674356334 +0000 UTC m=+0.041096403 container create 3d11bd89c0eda216a7abe22ce9b811382ad60e023b1c4f0fda03da615ebfc22e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:25:07 compute-0 systemd[1]: Started libpod-conmon-3d11bd89c0eda216a7abe22ce9b811382ad60e023b1c4f0fda03da615ebfc22e.scope.
Dec 09 16:25:07 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc9de5d7bc605d743a32ccafa39ffc1e123e97e606c32ade9d7a607b8c88a3d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc9de5d7bc605d743a32ccafa39ffc1e123e97e606c32ade9d7a607b8c88a3d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc9de5d7bc605d743a32ccafa39ffc1e123e97e606c32ade9d7a607b8c88a3d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc9de5d7bc605d743a32ccafa39ffc1e123e97e606c32ade9d7a607b8c88a3d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc9de5d7bc605d743a32ccafa39ffc1e123e97e606c32ade9d7a607b8c88a3d5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:07 compute-0 podman[246380]: 2025-12-09 16:25:07.657224209 +0000 UTC m=+0.023964298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:25:07 compute-0 podman[246380]: 2025-12-09 16:25:07.755801435 +0000 UTC m=+0.122541504 container init 3d11bd89c0eda216a7abe22ce9b811382ad60e023b1c4f0fda03da615ebfc22e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 09 16:25:07 compute-0 podman[246380]: 2025-12-09 16:25:07.768404811 +0000 UTC m=+0.135144880 container start 3d11bd89c0eda216a7abe22ce9b811382ad60e023b1c4f0fda03da615ebfc22e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:25:07 compute-0 podman[246380]: 2025-12-09 16:25:07.773759813 +0000 UTC m=+0.140499912 container attach 3d11bd89c0eda216a7abe22ce9b811382ad60e023b1c4f0fda03da615ebfc22e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:25:08 compute-0 ceph-mon[75222]: pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:08 compute-0 youthful_carson[246397]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:25:08 compute-0 youthful_carson[246397]: --> All data devices are unavailable
Dec 09 16:25:08 compute-0 systemd[1]: libpod-3d11bd89c0eda216a7abe22ce9b811382ad60e023b1c4f0fda03da615ebfc22e.scope: Deactivated successfully.
Dec 09 16:25:08 compute-0 podman[246380]: 2025-12-09 16:25:08.273606327 +0000 UTC m=+0.640346396 container died 3d11bd89c0eda216a7abe22ce9b811382ad60e023b1c4f0fda03da615ebfc22e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 09 16:25:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc9de5d7bc605d743a32ccafa39ffc1e123e97e606c32ade9d7a607b8c88a3d5-merged.mount: Deactivated successfully.
Dec 09 16:25:08 compute-0 podman[246380]: 2025-12-09 16:25:08.780674557 +0000 UTC m=+1.147414666 container remove 3d11bd89c0eda216a7abe22ce9b811382ad60e023b1c4f0fda03da615ebfc22e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carson, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 09 16:25:08 compute-0 systemd[1]: libpod-conmon-3d11bd89c0eda216a7abe22ce9b811382ad60e023b1c4f0fda03da615ebfc22e.scope: Deactivated successfully.
Dec 09 16:25:08 compute-0 sudo[246305]: pam_unix(sudo:session): session closed for user root
Dec 09 16:25:08 compute-0 sudo[246431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:25:08 compute-0 sudo[246431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:25:08 compute-0 sudo[246431]: pam_unix(sudo:session): session closed for user root
Dec 09 16:25:08 compute-0 sudo[246456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:25:08 compute-0 sudo[246456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:25:09 compute-0 ceph-mon[75222]: pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:09 compute-0 podman[246492]: 2025-12-09 16:25:09.237288611 +0000 UTC m=+0.023998580 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:25:09 compute-0 podman[246492]: 2025-12-09 16:25:09.343980846 +0000 UTC m=+0.130690825 container create e038e2abfcd056a8fd34afa67d0305db61a9cf32794bd1a6e1805c12a7946bbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:25:09 compute-0 systemd[1]: Started libpod-conmon-e038e2abfcd056a8fd34afa67d0305db61a9cf32794bd1a6e1805c12a7946bbb.scope.
Dec 09 16:25:09 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:25:09 compute-0 podman[246492]: 2025-12-09 16:25:09.558291072 +0000 UTC m=+0.345001061 container init e038e2abfcd056a8fd34afa67d0305db61a9cf32794bd1a6e1805c12a7946bbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:25:09 compute-0 podman[246492]: 2025-12-09 16:25:09.565120465 +0000 UTC m=+0.351830404 container start e038e2abfcd056a8fd34afa67d0305db61a9cf32794bd1a6e1805c12a7946bbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_swirles, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:25:09 compute-0 quirky_swirles[246508]: 167 167
Dec 09 16:25:09 compute-0 systemd[1]: libpod-e038e2abfcd056a8fd34afa67d0305db61a9cf32794bd1a6e1805c12a7946bbb.scope: Deactivated successfully.
Dec 09 16:25:09 compute-0 podman[246492]: 2025-12-09 16:25:09.616273971 +0000 UTC m=+0.402983920 container attach e038e2abfcd056a8fd34afa67d0305db61a9cf32794bd1a6e1805c12a7946bbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_swirles, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:25:09 compute-0 podman[246492]: 2025-12-09 16:25:09.616751664 +0000 UTC m=+0.403461613 container died e038e2abfcd056a8fd34afa67d0305db61a9cf32794bd1a6e1805c12a7946bbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-df521a4a8db306a5d86c84c82ddbf25d8541a0946a3e73f289133a5951dfcc16-merged.mount: Deactivated successfully.
Dec 09 16:25:09 compute-0 podman[246492]: 2025-12-09 16:25:09.665174912 +0000 UTC m=+0.451884891 container remove e038e2abfcd056a8fd34afa67d0305db61a9cf32794bd1a6e1805c12a7946bbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_swirles, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:25:09 compute-0 systemd[1]: libpod-conmon-e038e2abfcd056a8fd34afa67d0305db61a9cf32794bd1a6e1805c12a7946bbb.scope: Deactivated successfully.
Dec 09 16:25:09 compute-0 podman[246534]: 2025-12-09 16:25:09.848100772 +0000 UTC m=+0.044501109 container create 02b6cfa31ad9b871363212a7438a0d417d1e8186e65d27c9780a5944d966e05c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_antonelli, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 09 16:25:09 compute-0 systemd[1]: Started libpod-conmon-02b6cfa31ad9b871363212a7438a0d417d1e8186e65d27c9780a5944d966e05c.scope.
Dec 09 16:25:09 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f368a0fe591b987dff531438c3068701a2a294bae40af7213feb7aa19831f3e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f368a0fe591b987dff531438c3068701a2a294bae40af7213feb7aa19831f3e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f368a0fe591b987dff531438c3068701a2a294bae40af7213feb7aa19831f3e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f368a0fe591b987dff531438c3068701a2a294bae40af7213feb7aa19831f3e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:09 compute-0 podman[246534]: 2025-12-09 16:25:09.825544534 +0000 UTC m=+0.021944881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:25:09 compute-0 podman[246534]: 2025-12-09 16:25:09.937400975 +0000 UTC m=+0.133801342 container init 02b6cfa31ad9b871363212a7438a0d417d1e8186e65d27c9780a5944d966e05c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_antonelli, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:25:09 compute-0 podman[246534]: 2025-12-09 16:25:09.95134931 +0000 UTC m=+0.147749647 container start 02b6cfa31ad9b871363212a7438a0d417d1e8186e65d27c9780a5944d966e05c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 09 16:25:09 compute-0 podman[246534]: 2025-12-09 16:25:09.984749373 +0000 UTC m=+0.181149740 container attach 02b6cfa31ad9b871363212a7438a0d417d1e8186e65d27c9780a5944d966e05c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_antonelli, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:25:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:25:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/276892446' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:25:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:25:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/276892446' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:25:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/276892446' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:25:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/276892446' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]: {
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:     "0": [
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:         {
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "devices": [
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "/dev/loop3"
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             ],
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_name": "ceph_lv0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_size": "21470642176",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "name": "ceph_lv0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "tags": {
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.cluster_name": "ceph",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.crush_device_class": "",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.encrypted": "0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.objectstore": "bluestore",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.osd_id": "0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.type": "block",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.vdo": "0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.with_tpm": "0"
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             },
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "type": "block",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "vg_name": "ceph_vg0"
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:         }
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:     ],
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:     "1": [
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:         {
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "devices": [
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "/dev/loop4"
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             ],
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_name": "ceph_lv1",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_size": "21470642176",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "name": "ceph_lv1",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "tags": {
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.cluster_name": "ceph",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.crush_device_class": "",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.encrypted": "0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.objectstore": "bluestore",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.osd_id": "1",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.type": "block",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.vdo": "0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.with_tpm": "0"
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             },
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "type": "block",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "vg_name": "ceph_vg1"
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:         }
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:     ],
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:     "2": [
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:         {
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "devices": [
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "/dev/loop5"
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             ],
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_name": "ceph_lv2",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_size": "21470642176",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "name": "ceph_lv2",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "tags": {
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.cluster_name": "ceph",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.crush_device_class": "",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.encrypted": "0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.objectstore": "bluestore",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.osd_id": "2",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.type": "block",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.vdo": "0",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:                 "ceph.with_tpm": "0"
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             },
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "type": "block",
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:             "vg_name": "ceph_vg2"
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:         }
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]:     ]
Dec 09 16:25:10 compute-0 affectionate_antonelli[246550]: }
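
The block above is pretty-printed JSON emitted by a short-lived cephadm container; by its shape it appears to be `ceph-volume lvm list --format json` output: a map of OSD id to LV records whose "tags" mirror the ceph.* LVM tags in "lv_tags". A minimal sketch (hypothetical helper; field names taken verbatim from the output above) reducing it to an osd_id -> device map:

    import json

    def osd_device_map(lvm_list_output: str) -> dict:
        """Reduce `ceph-volume lvm list --format json` output (as printed
        above) to {osd_id: (lv_path, backing_devices)}."""
        result = {}
        for osd_id, records in json.loads(lvm_list_output).items():
            for rec in records:
                if rec.get("type") == "block":  # BlueStore data device
                    result[osd_id] = (rec["lv_path"], rec.get("devices", []))
        return result

    # For the JSON above this yields:
    # {'0': ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3']),
    #  '1': ('/dev/ceph_vg1/ceph_lv1', ['/dev/loop4']),
    #  '2': ('/dev/ceph_vg2/ceph_lv2', ['/dev/loop5'])}
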
Dec 09 16:25:10 compute-0 systemd[1]: libpod-02b6cfa31ad9b871363212a7438a0d417d1e8186e65d27c9780a5944d966e05c.scope: Deactivated successfully.
Dec 09 16:25:10 compute-0 podman[246534]: 2025-12-09 16:25:10.27088805 +0000 UTC m=+0.467288397 container died 02b6cfa31ad9b871363212a7438a0d417d1e8186e65d27c9780a5944d966e05c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 09 16:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-f368a0fe591b987dff531438c3068701a2a294bae40af7213feb7aa19831f3e0-merged.mount: Deactivated successfully.
Dec 09 16:25:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:10 compute-0 podman[246534]: 2025-12-09 16:25:10.410484855 +0000 UTC m=+0.606885192 container remove 02b6cfa31ad9b871363212a7438a0d417d1e8186e65d27c9780a5944d966e05c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_antonelli, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 16:25:10 compute-0 systemd[1]: libpod-conmon-02b6cfa31ad9b871363212a7438a0d417d1e8186e65d27c9780a5944d966e05c.scope: Deactivated successfully.
Dec 09 16:25:10 compute-0 sudo[246456]: pam_unix(sudo:session): session closed for user root
Dec 09 16:25:10 compute-0 sudo[246572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:25:10 compute-0 sudo[246572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:25:10 compute-0 sudo[246572]: pam_unix(sudo:session): session closed for user root
Dec 09 16:25:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:25:10 compute-0 sudo[246597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:25:10 compute-0 sudo[246597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:25:10 compute-0 podman[246634]: 2025-12-09 16:25:10.868393195 +0000 UTC m=+0.051481266 container create d81b0c5426080a2dc6e6c7fc298c1c0e73ae531c4dd7bd652477cc08761605bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 09 16:25:10 compute-0 systemd[1]: Started libpod-conmon-d81b0c5426080a2dc6e6c7fc298c1c0e73ae531c4dd7bd652477cc08761605bd.scope.
Dec 09 16:25:10 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:25:10 compute-0 podman[246634]: 2025-12-09 16:25:10.947345506 +0000 UTC m=+0.130433637 container init d81b0c5426080a2dc6e6c7fc298c1c0e73ae531c4dd7bd652477cc08761605bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_swanson, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:25:10 compute-0 podman[246634]: 2025-12-09 16:25:10.851492117 +0000 UTC m=+0.034580208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:25:10 compute-0 podman[246634]: 2025-12-09 16:25:10.95348259 +0000 UTC m=+0.136570701 container start d81b0c5426080a2dc6e6c7fc298c1c0e73ae531c4dd7bd652477cc08761605bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 09 16:25:10 compute-0 podman[246634]: 2025-12-09 16:25:10.957568275 +0000 UTC m=+0.140656376 container attach d81b0c5426080a2dc6e6c7fc298c1c0e73ae531c4dd7bd652477cc08761605bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_swanson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:25:10 compute-0 systemd[1]: libpod-d81b0c5426080a2dc6e6c7fc298c1c0e73ae531c4dd7bd652477cc08761605bd.scope: Deactivated successfully.
Dec 09 16:25:10 compute-0 conmon[246651]: conmon d81b0c5426080a2dc6e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d81b0c5426080a2dc6e6c7fc298c1c0e73ae531c4dd7bd652477cc08761605bd.scope/container/memory.events
Dec 09 16:25:10 compute-0 upbeat_swanson[246651]: 167 167
Dec 09 16:25:10 compute-0 podman[246634]: 2025-12-09 16:25:10.961262169 +0000 UTC m=+0.144350260 container died d81b0c5426080a2dc6e6c7fc298c1c0e73ae531c4dd7bd652477cc08761605bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_swanson, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:25:10 compute-0 podman[246648]: 2025-12-09 16:25:10.979904916 +0000 UTC m=+0.075263878 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 09 16:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b8a35971bb1e46c5b8e7a30cfbd2b86832a2d58cb127a5f6528a316f20738dc-merged.mount: Deactivated successfully.
Dec 09 16:25:10 compute-0 podman[246634]: 2025-12-09 16:25:10.997216285 +0000 UTC m=+0.180304356 container remove d81b0c5426080a2dc6e6c7fc298c1c0e73ae531c4dd7bd652477cc08761605bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Dec 09 16:25:11 compute-0 systemd[1]: libpod-conmon-d81b0c5426080a2dc6e6c7fc298c1c0e73ae531c4dd7bd652477cc08761605bd.scope: Deactivated successfully.
Dec 09 16:25:11 compute-0 podman[246690]: 2025-12-09 16:25:11.175886715 +0000 UTC m=+0.051847077 container create 7f09f0ef02b2400fab5d2e2cbcf7f687c0f26ec4bad4be72fffb7ad6e58c160b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haibt, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:25:11 compute-0 systemd[1]: Started libpod-conmon-7f09f0ef02b2400fab5d2e2cbcf7f687c0f26ec4bad4be72fffb7ad6e58c160b.scope.
Dec 09 16:25:11 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:25:11 compute-0 ceph-mon[75222]: pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae2da485f190068717b5feb5db74db58bd1fe29902d7fb2e4aeaca6f33f7ee3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae2da485f190068717b5feb5db74db58bd1fe29902d7fb2e4aeaca6f33f7ee3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae2da485f190068717b5feb5db74db58bd1fe29902d7fb2e4aeaca6f33f7ee3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae2da485f190068717b5feb5db74db58bd1fe29902d7fb2e4aeaca6f33f7ee3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:25:11 compute-0 podman[246690]: 2025-12-09 16:25:11.15412263 +0000 UTC m=+0.030083002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:25:11 compute-0 podman[246690]: 2025-12-09 16:25:11.252351075 +0000 UTC m=+0.128311457 container init 7f09f0ef02b2400fab5d2e2cbcf7f687c0f26ec4bad4be72fffb7ad6e58c160b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 09 16:25:11 compute-0 podman[246690]: 2025-12-09 16:25:11.263924762 +0000 UTC m=+0.139885114 container start 7f09f0ef02b2400fab5d2e2cbcf7f687c0f26ec4bad4be72fffb7ad6e58c160b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haibt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 09 16:25:11 compute-0 podman[246690]: 2025-12-09 16:25:11.26739374 +0000 UTC m=+0.143354192 container attach 7f09f0ef02b2400fab5d2e2cbcf7f687c0f26ec4bad4be72fffb7ad6e58c160b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 16:25:11 compute-0 lvm[246784]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:25:11 compute-0 lvm[246785]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:25:11 compute-0 lvm[246784]: VG ceph_vg0 finished
Dec 09 16:25:11 compute-0 lvm[246785]: VG ceph_vg1 finished
Dec 09 16:25:11 compute-0 lvm[246787]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:25:11 compute-0 lvm[246787]: VG ceph_vg2 finished
Dec 09 16:25:12 compute-0 zen_haibt[246706]: {}
Dec 09 16:25:12 compute-0 systemd[1]: libpod-7f09f0ef02b2400fab5d2e2cbcf7f687c0f26ec4bad4be72fffb7ad6e58c160b.scope: Deactivated successfully.
Dec 09 16:25:12 compute-0 podman[246690]: 2025-12-09 16:25:12.034556749 +0000 UTC m=+0.910517101 container died 7f09f0ef02b2400fab5d2e2cbcf7f687c0f26ec4bad4be72fffb7ad6e58c160b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 09 16:25:12 compute-0 systemd[1]: libpod-7f09f0ef02b2400fab5d2e2cbcf7f687c0f26ec4bad4be72fffb7ad6e58c160b.scope: Consumed 1.265s CPU time.
Dec 09 16:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-cae2da485f190068717b5feb5db74db58bd1fe29902d7fb2e4aeaca6f33f7ee3-merged.mount: Deactivated successfully.
Dec 09 16:25:12 compute-0 podman[246690]: 2025-12-09 16:25:12.078364737 +0000 UTC m=+0.954325089 container remove 7f09f0ef02b2400fab5d2e2cbcf7f687c0f26ec4bad4be72fffb7ad6e58c160b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haibt, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:25:12 compute-0 systemd[1]: libpod-conmon-7f09f0ef02b2400fab5d2e2cbcf7f687c0f26ec4bad4be72fffb7ad6e58c160b.scope: Deactivated successfully.
Dec 09 16:25:12 compute-0 sudo[246597]: pam_unix(sudo:session): session closed for user root
Dec 09 16:25:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:25:12 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:25:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:25:12 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:25:12 compute-0 sudo[246803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:25:12 compute-0 sudo[246803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:25:12 compute-0 sudo[246803]: pam_unix(sudo:session): session closed for user root
Dec 09 16:25:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:25:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:25:13 compute-0 ceph-mon[75222]: pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:15 compute-0 ceph-mon[75222]: pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:25:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:17 compute-0 ceph-mon[75222]: pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:25:17.843 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:25:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:25:17.844 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:25:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:25:17.844 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:25:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Dec 09 16:25:19 compute-0 ceph-mon[75222]: pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Dec 09 16:25:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec 09 16:25:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:25:21 compute-0 ceph-mon[75222]: pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec 09 16:25:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec 09 16:25:23 compute-0 ceph-mon[75222]: pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec 09 16:25:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:25:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:25:25 compute-0 ceph-mon[75222]: pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:25:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:25:25
Dec 09 16:25:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:25:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:25:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta']
Dec 09 16:25:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:25:26 compute-0 podman[246829]: 2025-12-09 16:25:26.647016549 +0000 UTC m=+0.084378526 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 09 16:25:26 compute-0 podman[246828]: 2025-12-09 16:25:26.654753187 +0000 UTC m=+0.093003319 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
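
The health_status lines above embed each container's entire edpm_ansible definition in a config_data label, written as a Python literal (single quotes, bare True) rather than JSON. A sketch of reading it back, assuming podman's Go-template inspect syntax; the helper name is hypothetical:

    import ast
    import subprocess

    def config_data(container: str) -> dict:
        """Hypothetical helper: fetch the edpm_ansible 'config_data' label.
        The label is a Python literal, not JSON, so use ast.literal_eval."""
        label = subprocess.check_output(
            ["podman", "inspect", container, "--format",
             '{{ index .Config.Labels "config_data" }}'],
            text=True,
        ).strip()
        return ast.literal_eval(label)

    # config_data("multipathd")["healthcheck"]["test"] == "/openstack/healthcheck"
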
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:25:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:25:27 compute-0 ceph-mon[75222]: pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:25:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:25:29 compute-0 ceph-mon[75222]: pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 09 16:25:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 09 16:25:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:25:31 compute-0 ceph-mon[75222]: pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 09 16:25:31 compute-0 sshd-session[246875]: Invalid user test from 146.190.31.45 port 47796
Dec 09 16:25:31 compute-0 sshd-session[246875]: Connection closed by invalid user test 146.190.31.45 port 47796 [preauth]
Dec 09 16:25:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Dec 09 16:25:33 compute-0 ceph-mon[75222]: pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Dec 09 16:25:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Dec 09 16:25:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:25:35 compute-0 ceph-mon[75222]: pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:25:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
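
The pg_autoscaler figures above are internally consistent: each "pg target" equals the pool's share of raw space times its bias times a cluster-wide PG budget of 300, which matches the default mon_target_pg_per_osd of 100 across the three OSDs listed earlier (an inference from the numbers, not stated in the log). The result is then quantized to a power of two, with small pools pinned at their current pg_num. A check in Python:

    def pg_target(capacity_ratio: float, bias: float,
                  osds: int = 3, target_pg_per_osd: int = 100) -> float:
        # Assumed model: raw-space share x bias x cluster PG budget.
        return capacity_ratio * bias * osds * target_pg_per_osd

    # '.mgr' line:          7.185749983720779e-06, bias 1.0
    assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-15
    # 'cephfs.cephfs.meta': 2.0333656678172135e-06, bias 4.0
    assert abs(pg_target(2.0333656678172135e-06, 4.0) - 0.002440038801380656) < 1e-15
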
Dec 09 16:25:37 compute-0 ceph-mon[75222]: pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:39 compute-0 ceph-mon[75222]: pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:25:41 compute-0 ceph-mon[75222]: pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:41 compute-0 podman[246877]: 2025-12-09 16:25:41.632545238 +0000 UTC m=+0.070420191 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 09 16:25:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:43 compute-0 ceph-mon[75222]: pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:25:45 compute-0 ceph-mon[75222]: pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:47 compute-0 ceph-mon[75222]: pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:49 compute-0 ceph-mon[75222]: pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:25:51 compute-0 ceph-mon[75222]: pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.081 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.081 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.081 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.081 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.081 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:25:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:25:53 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570656750' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.606 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
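
nova_compute gathers Ceph capacity by shelling out to exactly the command logged above, which in turn produces the mon_command audit entries on the ceph-mon side. A minimal sketch of the same round-trip; the 'total_avail_bytes' field name is assumed from ceph's current JSON output, not shown in this log:

    import json
    import subprocess

    def ceph_avail_bytes(conf: str = "/etc/ceph/ceph.conf",
                         client: str = "openstack") -> int:
        """Run the `ceph df` call nova_compute logs above and return the
        cluster-wide available bytes."""
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
            text=True,
        )
        return json.loads(out)["stats"]["total_avail_bytes"]
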
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.748 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.749 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5138MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.750 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.750 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.868 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.868 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:25:53 compute-0 nova_compute[243452]: 2025-12-09 16:25:53.899 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:25:54 compute-0 ceph-mon[75222]: pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:54 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3570656750' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:25:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:25:54 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950528543' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:25:54 compute-0 nova_compute[243452]: 2025-12-09 16:25:54.690 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.791s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:25:54 compute-0 nova_compute[243452]: 2025-12-09 16:25:54.698 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:25:54 compute-0 nova_compute[243452]: 2025-12-09 16:25:54.716 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:25:54 compute-0 nova_compute[243452]: 2025-12-09 16:25:54.719 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:25:54 compute-0 nova_compute[243452]: 2025-12-09 16:25:54.719 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:25:55 compute-0 ceph-mon[75222]: pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:55 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3950528543' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:25:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:25:55 compute-0 nova_compute[243452]: 2025-12-09 16:25:55.712 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:25:55 compute-0 nova_compute[243452]: 2025-12-09 16:25:55.713 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:25:55 compute-0 nova_compute[243452]: 2025-12-09 16:25:55.732 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:25:55 compute-0 nova_compute[243452]: 2025-12-09 16:25:55.732 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:25:55 compute-0 nova_compute[243452]: 2025-12-09 16:25:55.732 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:25:55 compute-0 nova_compute[243452]: 2025-12-09 16:25:55.733 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:25:56 compute-0 nova_compute[243452]: 2025-12-09 16:25:56.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:25:56 compute-0 nova_compute[243452]: 2025-12-09 16:25:56.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:25:56 compute-0 nova_compute[243452]: 2025-12-09 16:25:56.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:25:56 compute-0 nova_compute[243452]: 2025-12-09 16:25:56.201 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:25:56 compute-0 nova_compute[243452]: 2025-12-09 16:25:56.201 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:25:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:25:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:25:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:25:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:25:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:25:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:25:57 compute-0 ceph-mon[75222]: pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:57 compute-0 podman[246942]: 2025-12-09 16:25:57.610533397 +0000 UTC m=+0.050119978 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 09 16:25:57 compute-0 podman[246941]: 2025-12-09 16:25:57.692614806 +0000 UTC m=+0.137726603 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:25:58 compute-0 nova_compute[243452]: 2025-12-09 16:25:58.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:25:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:25:59 compute-0 ceph-mon[75222]: pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:26:01 compute-0 ceph-mon[75222]: pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:03 compute-0 ceph-mon[75222]: pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:26:05 compute-0 ceph-mon[75222]: pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:07 compute-0 ceph-mon[75222]: pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:09 compute-0 ceph-mon[75222]: pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:26:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1276403540' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:26:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:26:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1276403540' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:26:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:26:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/1276403540' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:26:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/1276403540' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:26:11 compute-0 ceph-mon[75222]: pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:12 compute-0 sudo[246983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:26:12 compute-0 sudo[246983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:26:12 compute-0 sudo[246983]: pam_unix(sudo:session): session closed for user root
Dec 09 16:26:12 compute-0 sudo[247014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:26:12 compute-0 sudo[247014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:26:12 compute-0 podman[247007]: 2025-12-09 16:26:12.398274717 +0000 UTC m=+0.082682638 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 09 16:26:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:12 compute-0 sudo[247014]: pam_unix(sudo:session): session closed for user root
Dec 09 16:26:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 09 16:26:12 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 09 16:26:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:26:12 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:26:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:26:12 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:26:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:26:12 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:26:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:26:12 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:26:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:26:12 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:26:12 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:26:12 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:26:13 compute-0 sudo[247084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:26:13 compute-0 sudo[247084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:26:13 compute-0 sudo[247084]: pam_unix(sudo:session): session closed for user root
Dec 09 16:26:13 compute-0 sudo[247109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:26:13 compute-0 sudo[247109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:26:13 compute-0 podman[247146]: 2025-12-09 16:26:13.442485404 +0000 UTC m=+0.050818987 container create 67b577b5bf381d263464448cc56c67b98905f8eeb96bf94a6f9b2cbf7c9921ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:26:13 compute-0 systemd[1]: Started libpod-conmon-67b577b5bf381d263464448cc56c67b98905f8eeb96bf94a6f9b2cbf7c9921ab.scope.
Dec 09 16:26:13 compute-0 podman[247146]: 2025-12-09 16:26:13.423970741 +0000 UTC m=+0.032304344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:26:13 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:26:13 compute-0 podman[247146]: 2025-12-09 16:26:13.543065367 +0000 UTC m=+0.151399040 container init 67b577b5bf381d263464448cc56c67b98905f8eeb96bf94a6f9b2cbf7c9921ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:26:13 compute-0 podman[247146]: 2025-12-09 16:26:13.551968109 +0000 UTC m=+0.160301692 container start 67b577b5bf381d263464448cc56c67b98905f8eeb96bf94a6f9b2cbf7c9921ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:26:13 compute-0 podman[247146]: 2025-12-09 16:26:13.555968052 +0000 UTC m=+0.164301655 container attach 67b577b5bf381d263464448cc56c67b98905f8eeb96bf94a6f9b2cbf7c9921ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 09 16:26:13 compute-0 magical_wilson[247162]: 167 167
Dec 09 16:26:13 compute-0 systemd[1]: libpod-67b577b5bf381d263464448cc56c67b98905f8eeb96bf94a6f9b2cbf7c9921ab.scope: Deactivated successfully.
Dec 09 16:26:13 compute-0 podman[247146]: 2025-12-09 16:26:13.561458347 +0000 UTC m=+0.169791930 container died 67b577b5bf381d263464448cc56c67b98905f8eeb96bf94a6f9b2cbf7c9921ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 09 16:26:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3d1bca4a5649ef979c76432e572872c79fe43e70a609e0e3dc0cbd769c828e8-merged.mount: Deactivated successfully.
Dec 09 16:26:13 compute-0 podman[247146]: 2025-12-09 16:26:13.612830919 +0000 UTC m=+0.221164502 container remove 67b577b5bf381d263464448cc56c67b98905f8eeb96bf94a6f9b2cbf7c9921ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:26:13 compute-0 systemd[1]: libpod-conmon-67b577b5bf381d263464448cc56c67b98905f8eeb96bf94a6f9b2cbf7c9921ab.scope: Deactivated successfully.
Dec 09 16:26:13 compute-0 ceph-mon[75222]: pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 09 16:26:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:26:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:26:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:26:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:26:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:26:13 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:26:13 compute-0 podman[247188]: 2025-12-09 16:26:13.802277182 +0000 UTC m=+0.045169957 container create ad8365d4ea91460c9e0c59c55f65ffc21d14a852bc9968eac25a87cf31b8376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:26:13 compute-0 systemd[1]: Started libpod-conmon-ad8365d4ea91460c9e0c59c55f65ffc21d14a852bc9968eac25a87cf31b8376a.scope.
Dec 09 16:26:13 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e09eda48a0f8cb78924df04cd372c7cf6884b2bd13144cd7ff06283630ed9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:26:13 compute-0 podman[247188]: 2025-12-09 16:26:13.783485891 +0000 UTC m=+0.026378686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e09eda48a0f8cb78924df04cd372c7cf6884b2bd13144cd7ff06283630ed9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e09eda48a0f8cb78924df04cd372c7cf6884b2bd13144cd7ff06283630ed9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e09eda48a0f8cb78924df04cd372c7cf6884b2bd13144cd7ff06283630ed9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e09eda48a0f8cb78924df04cd372c7cf6884b2bd13144cd7ff06283630ed9b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:26:13 compute-0 podman[247188]: 2025-12-09 16:26:13.889414095 +0000 UTC m=+0.132306960 container init ad8365d4ea91460c9e0c59c55f65ffc21d14a852bc9968eac25a87cf31b8376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bardeen, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 09 16:26:13 compute-0 podman[247188]: 2025-12-09 16:26:13.897885014 +0000 UTC m=+0.140777809 container start ad8365d4ea91460c9e0c59c55f65ffc21d14a852bc9968eac25a87cf31b8376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:26:13 compute-0 podman[247188]: 2025-12-09 16:26:13.901348342 +0000 UTC m=+0.144241137 container attach ad8365d4ea91460c9e0c59c55f65ffc21d14a852bc9968eac25a87cf31b8376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bardeen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:26:14 compute-0 focused_bardeen[247205]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:26:14 compute-0 focused_bardeen[247205]: --> All data devices are unavailable
Dec 09 16:26:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:14 compute-0 systemd[1]: libpod-ad8365d4ea91460c9e0c59c55f65ffc21d14a852bc9968eac25a87cf31b8376a.scope: Deactivated successfully.
Dec 09 16:26:14 compute-0 podman[247225]: 2025-12-09 16:26:14.499434524 +0000 UTC m=+0.026281864 container died ad8365d4ea91460c9e0c59c55f65ffc21d14a852bc9968eac25a87cf31b8376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bardeen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:26:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-80e09eda48a0f8cb78924df04cd372c7cf6884b2bd13144cd7ff06283630ed9b-merged.mount: Deactivated successfully.
Dec 09 16:26:14 compute-0 podman[247225]: 2025-12-09 16:26:14.546409551 +0000 UTC m=+0.073256821 container remove ad8365d4ea91460c9e0c59c55f65ffc21d14a852bc9968eac25a87cf31b8376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bardeen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 09 16:26:14 compute-0 systemd[1]: libpod-conmon-ad8365d4ea91460c9e0c59c55f65ffc21d14a852bc9968eac25a87cf31b8376a.scope: Deactivated successfully.
Dec 09 16:26:14 compute-0 sudo[247109]: pam_unix(sudo:session): session closed for user root
Dec 09 16:26:14 compute-0 sudo[247241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:26:14 compute-0 sudo[247241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:26:14 compute-0 sudo[247241]: pam_unix(sudo:session): session closed for user root
Dec 09 16:26:14 compute-0 sudo[247266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:26:14 compute-0 sudo[247266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:26:15 compute-0 podman[247303]: 2025-12-09 16:26:15.076364878 +0000 UTC m=+0.059492122 container create fbc23f70a13227cb89d6a0e9247d8cb58be688c63652a6748e43cf31d5ca42fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:26:15 compute-0 systemd[1]: Started libpod-conmon-fbc23f70a13227cb89d6a0e9247d8cb58be688c63652a6748e43cf31d5ca42fb.scope.
Dec 09 16:26:15 compute-0 podman[247303]: 2025-12-09 16:26:15.053890413 +0000 UTC m=+0.037017747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:26:15 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:26:15 compute-0 podman[247303]: 2025-12-09 16:26:15.166169706 +0000 UTC m=+0.149297010 container init fbc23f70a13227cb89d6a0e9247d8cb58be688c63652a6748e43cf31d5ca42fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:26:15 compute-0 podman[247303]: 2025-12-09 16:26:15.172997499 +0000 UTC m=+0.156124733 container start fbc23f70a13227cb89d6a0e9247d8cb58be688c63652a6748e43cf31d5ca42fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_brahmagupta, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:26:15 compute-0 serene_brahmagupta[247320]: 167 167
Dec 09 16:26:15 compute-0 podman[247303]: 2025-12-09 16:26:15.177262059 +0000 UTC m=+0.160389393 container attach fbc23f70a13227cb89d6a0e9247d8cb58be688c63652a6748e43cf31d5ca42fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_brahmagupta, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:26:15 compute-0 systemd[1]: libpod-fbc23f70a13227cb89d6a0e9247d8cb58be688c63652a6748e43cf31d5ca42fb.scope: Deactivated successfully.
Dec 09 16:26:15 compute-0 podman[247303]: 2025-12-09 16:26:15.178806573 +0000 UTC m=+0.161933817 container died fbc23f70a13227cb89d6a0e9247d8cb58be688c63652a6748e43cf31d5ca42fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1779763a0d4c6e0ff5ea6ae3049d45c2d4e99789e492c0b227b4b017c32c7080-merged.mount: Deactivated successfully.
Dec 09 16:26:15 compute-0 podman[247303]: 2025-12-09 16:26:15.218595457 +0000 UTC m=+0.201722731 container remove fbc23f70a13227cb89d6a0e9247d8cb58be688c63652a6748e43cf31d5ca42fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 16:26:15 compute-0 systemd[1]: libpod-conmon-fbc23f70a13227cb89d6a0e9247d8cb58be688c63652a6748e43cf31d5ca42fb.scope: Deactivated successfully.
Dec 09 16:26:15 compute-0 podman[247346]: 2025-12-09 16:26:15.415135361 +0000 UTC m=+0.038616882 container create fc926203ba00956d9457e2ed5fe9be6ebeb6e65e0f1e283c7a4d498f560e905d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_jones, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:26:15 compute-0 systemd[1]: Started libpod-conmon-fc926203ba00956d9457e2ed5fe9be6ebeb6e65e0f1e283c7a4d498f560e905d.scope.
Dec 09 16:26:15 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd61f00821eb67ae66a4f21140e72672218e0e78682b237a0ac7cb49b20768b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd61f00821eb67ae66a4f21140e72672218e0e78682b237a0ac7cb49b20768b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd61f00821eb67ae66a4f21140e72672218e0e78682b237a0ac7cb49b20768b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd61f00821eb67ae66a4f21140e72672218e0e78682b237a0ac7cb49b20768b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:26:15 compute-0 podman[247346]: 2025-12-09 16:26:15.489620586 +0000 UTC m=+0.113102127 container init fc926203ba00956d9457e2ed5fe9be6ebeb6e65e0f1e283c7a4d498f560e905d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_jones, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 09 16:26:15 compute-0 sshd-session[247323]: Invalid user test from 146.190.31.45 port 44772
Dec 09 16:26:15 compute-0 podman[247346]: 2025-12-09 16:26:15.39455265 +0000 UTC m=+0.018034181 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:26:15 compute-0 podman[247346]: 2025-12-09 16:26:15.498337243 +0000 UTC m=+0.121818754 container start fc926203ba00956d9457e2ed5fe9be6ebeb6e65e0f1e283c7a4d498f560e905d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 09 16:26:15 compute-0 podman[247346]: 2025-12-09 16:26:15.501413379 +0000 UTC m=+0.124894900 container attach fc926203ba00956d9457e2ed5fe9be6ebeb6e65e0f1e283c7a4d498f560e905d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:26:15 compute-0 sshd-session[247323]: Connection closed by invalid user test 146.190.31.45 port 44772 [preauth]
Dec 09 16:26:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:26:15 compute-0 ceph-mon[75222]: pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:15 compute-0 nervous_jones[247362]: {
Dec 09 16:26:15 compute-0 nervous_jones[247362]:     "0": [
Dec 09 16:26:15 compute-0 nervous_jones[247362]:         {
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "devices": [
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "/dev/loop3"
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             ],
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_name": "ceph_lv0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_size": "21470642176",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "name": "ceph_lv0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "tags": {
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.cluster_name": "ceph",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.crush_device_class": "",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.encrypted": "0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.objectstore": "bluestore",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.osd_id": "0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.type": "block",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.vdo": "0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.with_tpm": "0"
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             },
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "type": "block",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "vg_name": "ceph_vg0"
Dec 09 16:26:15 compute-0 nervous_jones[247362]:         }
Dec 09 16:26:15 compute-0 nervous_jones[247362]:     ],
Dec 09 16:26:15 compute-0 nervous_jones[247362]:     "1": [
Dec 09 16:26:15 compute-0 nervous_jones[247362]:         {
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "devices": [
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "/dev/loop4"
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             ],
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_name": "ceph_lv1",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_size": "21470642176",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "name": "ceph_lv1",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "tags": {
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.cluster_name": "ceph",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.crush_device_class": "",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.encrypted": "0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.objectstore": "bluestore",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.osd_id": "1",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.type": "block",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.vdo": "0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.with_tpm": "0"
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             },
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "type": "block",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "vg_name": "ceph_vg1"
Dec 09 16:26:15 compute-0 nervous_jones[247362]:         }
Dec 09 16:26:15 compute-0 nervous_jones[247362]:     ],
Dec 09 16:26:15 compute-0 nervous_jones[247362]:     "2": [
Dec 09 16:26:15 compute-0 nervous_jones[247362]:         {
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "devices": [
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "/dev/loop5"
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             ],
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_name": "ceph_lv2",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_size": "21470642176",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "name": "ceph_lv2",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "tags": {
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.cluster_name": "ceph",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.crush_device_class": "",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.encrypted": "0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.objectstore": "bluestore",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.osd_id": "2",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.type": "block",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.vdo": "0",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:                 "ceph.with_tpm": "0"
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             },
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "type": "block",
Dec 09 16:26:15 compute-0 nervous_jones[247362]:             "vg_name": "ceph_vg2"
Dec 09 16:26:15 compute-0 nervous_jones[247362]:         }
Dec 09 16:26:15 compute-0 nervous_jones[247362]:     ]
Dec 09 16:26:15 compute-0 nervous_jones[247362]: }
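[annotation] The JSON block above, reassembled from the per-line journal records, is what the short-lived nervous_jones container printed to stdout; its shape matches ceph-volume "lvm list --format json" output, keyed by OSD id. Each lv_size of 21470642176 bytes is just under 20 GiB, so the three logical volumes together account for the 60 GiB total reported in the surrounding pgmap lines. A minimal sketch for pulling the OSD-to-device mapping out of such a document (the filename is hypothetical; here the JSON arrived on the container's stdout rather than in a file):

    import json

    # Hypothetical input file; in this log the JSON was captured from the
    # stdout of the cephadm-launched ceph-volume container.
    with open("ceph_volume_lvm_list.json") as fh:
        listing = json.load(fh)

    # Top-level keys are OSD ids ("0", "1", "2"); each maps to a list of
    # logical volumes carrying ceph.* tags.
    for osd_id, lvs in sorted(listing.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} objectstore={tags['ceph.objectstore']}")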
Dec 09 16:26:15 compute-0 systemd[1]: libpod-fc926203ba00956d9457e2ed5fe9be6ebeb6e65e0f1e283c7a4d498f560e905d.scope: Deactivated successfully.
Dec 09 16:26:15 compute-0 podman[247346]: 2025-12-09 16:26:15.789908742 +0000 UTC m=+0.413390253 container died fc926203ba00956d9457e2ed5fe9be6ebeb6e65e0f1e283c7a4d498f560e905d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_jones, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cd61f00821eb67ae66a4f21140e72672218e0e78682b237a0ac7cb49b20768b-merged.mount: Deactivated successfully.
Dec 09 16:26:15 compute-0 podman[247346]: 2025-12-09 16:26:15.831434396 +0000 UTC m=+0.454915907 container remove fc926203ba00956d9457e2ed5fe9be6ebeb6e65e0f1e283c7a4d498f560e905d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_jones, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 09 16:26:15 compute-0 systemd[1]: libpod-conmon-fc926203ba00956d9457e2ed5fe9be6ebeb6e65e0f1e283c7a4d498f560e905d.scope: Deactivated successfully.
Dec 09 16:26:15 compute-0 sudo[247266]: pam_unix(sudo:session): session closed for user root
Dec 09 16:26:15 compute-0 sudo[247382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:26:15 compute-0 sudo[247382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:26:15 compute-0 sudo[247382]: pam_unix(sudo:session): session closed for user root
Dec 09 16:26:16 compute-0 sudo[247407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:26:16 compute-0 sudo[247407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:26:16 compute-0 podman[247444]: 2025-12-09 16:26:16.358361145 +0000 UTC m=+0.052142364 container create 7bd2d723c336cd09802c4c0e36f85965f477b64c519c34a03465dae689a3c600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_perlman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec 09 16:26:16 compute-0 systemd[1]: Started libpod-conmon-7bd2d723c336cd09802c4c0e36f85965f477b64c519c34a03465dae689a3c600.scope.
Dec 09 16:26:16 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:26:16 compute-0 podman[247444]: 2025-12-09 16:26:16.423255129 +0000 UTC m=+0.117036368 container init 7bd2d723c336cd09802c4c0e36f85965f477b64c519c34a03465dae689a3c600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:26:16 compute-0 podman[247444]: 2025-12-09 16:26:16.332393512 +0000 UTC m=+0.026174751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:26:16 compute-0 podman[247444]: 2025-12-09 16:26:16.42893568 +0000 UTC m=+0.122716889 container start 7bd2d723c336cd09802c4c0e36f85965f477b64c519c34a03465dae689a3c600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_perlman, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:26:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:16 compute-0 compassionate_perlman[247461]: 167 167
Dec 09 16:26:16 compute-0 podman[247444]: 2025-12-09 16:26:16.432746838 +0000 UTC m=+0.126528047 container attach 7bd2d723c336cd09802c4c0e36f85965f477b64c519c34a03465dae689a3c600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 09 16:26:16 compute-0 podman[247444]: 2025-12-09 16:26:16.434073415 +0000 UTC m=+0.127854644 container died 7bd2d723c336cd09802c4c0e36f85965f477b64c519c34a03465dae689a3c600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_perlman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:26:16 compute-0 systemd[1]: libpod-7bd2d723c336cd09802c4c0e36f85965f477b64c519c34a03465dae689a3c600.scope: Deactivated successfully.
Dec 09 16:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbe37af3c3425edda073ad61b03e4adbbb652edb5a4053fb55209bd5107db238-merged.mount: Deactivated successfully.
Dec 09 16:26:16 compute-0 podman[247444]: 2025-12-09 16:26:16.467623233 +0000 UTC m=+0.161404452 container remove 7bd2d723c336cd09802c4c0e36f85965f477b64c519c34a03465dae689a3c600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 09 16:26:16 compute-0 systemd[1]: libpod-conmon-7bd2d723c336cd09802c4c0e36f85965f477b64c519c34a03465dae689a3c600.scope: Deactivated successfully.
Dec 09 16:26:16 compute-0 podman[247483]: 2025-12-09 16:26:16.627166762 +0000 UTC m=+0.039747524 container create 487f950ca1e3706332cd1a7d97b43e3fe82002f1877df98a1fbe3e0d2a396b3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:26:16 compute-0 systemd[1]: Started libpod-conmon-487f950ca1e3706332cd1a7d97b43e3fe82002f1877df98a1fbe3e0d2a396b3d.scope.
Dec 09 16:26:16 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23b04c9a94808237f07b3d277c55abf05cb93c0712415640c7d9ec071cdc419/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23b04c9a94808237f07b3d277c55abf05cb93c0712415640c7d9ec071cdc419/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23b04c9a94808237f07b3d277c55abf05cb93c0712415640c7d9ec071cdc419/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23b04c9a94808237f07b3d277c55abf05cb93c0712415640c7d9ec071cdc419/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
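[annotation] The four xfs warnings above all reduce to the 32-bit signed time_t limit: 0x7fffffff seconds after the Unix epoch, which falls on 19 January 2038. A one-line check:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest value a signed 32-bit time_t can hold.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00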
Dec 09 16:26:16 compute-0 podman[247483]: 2025-12-09 16:26:16.703001355 +0000 UTC m=+0.115582127 container init 487f950ca1e3706332cd1a7d97b43e3fe82002f1877df98a1fbe3e0d2a396b3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:26:16 compute-0 podman[247483]: 2025-12-09 16:26:16.610531832 +0000 UTC m=+0.023112604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:26:16 compute-0 podman[247483]: 2025-12-09 16:26:16.716020173 +0000 UTC m=+0.128600925 container start 487f950ca1e3706332cd1a7d97b43e3fe82002f1877df98a1fbe3e0d2a396b3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_tharp, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 09 16:26:16 compute-0 podman[247483]: 2025-12-09 16:26:16.719483951 +0000 UTC m=+0.132064753 container attach 487f950ca1e3706332cd1a7d97b43e3fe82002f1877df98a1fbe3e0d2a396b3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:26:17 compute-0 lvm[247578]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:26:17 compute-0 lvm[247578]: VG ceph_vg0 finished
Dec 09 16:26:17 compute-0 lvm[247579]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:26:17 compute-0 lvm[247579]: VG ceph_vg1 finished
Dec 09 16:26:17 compute-0 lvm[247581]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:26:17 compute-0 lvm[247581]: VG ceph_vg2 finished
Dec 09 16:26:17 compute-0 awesome_tharp[247499]: {}
Dec 09 16:26:17 compute-0 systemd[1]: libpod-487f950ca1e3706332cd1a7d97b43e3fe82002f1877df98a1fbe3e0d2a396b3d.scope: Deactivated successfully.
Dec 09 16:26:17 compute-0 systemd[1]: libpod-487f950ca1e3706332cd1a7d97b43e3fe82002f1877df98a1fbe3e0d2a396b3d.scope: Consumed 1.254s CPU time.
Dec 09 16:26:17 compute-0 podman[247584]: 2025-12-09 16:26:17.574184264 +0000 UTC m=+0.029344510 container died 487f950ca1e3706332cd1a7d97b43e3fe82002f1877df98a1fbe3e0d2a396b3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_tharp, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f23b04c9a94808237f07b3d277c55abf05cb93c0712415640c7d9ec071cdc419-merged.mount: Deactivated successfully.
Dec 09 16:26:17 compute-0 podman[247584]: 2025-12-09 16:26:17.61650901 +0000 UTC m=+0.071669226 container remove 487f950ca1e3706332cd1a7d97b43e3fe82002f1877df98a1fbe3e0d2a396b3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:26:17 compute-0 systemd[1]: libpod-conmon-487f950ca1e3706332cd1a7d97b43e3fe82002f1877df98a1fbe3e0d2a396b3d.scope: Deactivated successfully.
Dec 09 16:26:17 compute-0 sudo[247407]: pam_unix(sudo:session): session closed for user root
Dec 09 16:26:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:26:17 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:26:17 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:26:17 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:26:17 compute-0 sudo[247599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:26:17 compute-0 sudo[247599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:26:17 compute-0 sudo[247599]: pam_unix(sudo:session): session closed for user root
Dec 09 16:26:17 compute-0 ceph-mon[75222]: pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:26:17 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:26:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:26:17.844 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:26:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:26:17.845 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:26:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:26:17.846 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
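[annotation] The three DEBUG lines above are the acquire/acquired/released trace that oslo.concurrency emits around a named lock; the lock name matching the qualified method name suggests a synchronized wrapper on ProcessMonitor._check_child_processes. An illustrative sketch of that pattern, not the neutron source itself:

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        # The "Acquiring lock ..." / "Lock ... acquired" / "released"
        # DEBUG lines in the journal come from the wrapper this
        # decorator installs around the method.
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            pass  # liveness checks on monitored child processes go here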
Dec 09 16:26:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:19 compute-0 ceph-mon[75222]: pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:26:21 compute-0 ceph-mon[75222]: pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:23 compute-0 ceph-mon[75222]: pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:26:25 compute-0 ceph-mon[75222]: pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:26:25
Dec 09 16:26:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:26:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:26:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'images', '.mgr']
Dec 09 16:26:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:26:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:26:27 compute-0 ceph-mon[75222]: pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:28 compute-0 podman[247625]: 2025-12-09 16:26:28.648623062 +0000 UTC m=+0.068616711 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 09 16:26:28 compute-0 podman[247624]: 2025-12-09 16:26:28.701557138 +0000 UTC m=+0.131427046 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 09 16:26:29 compute-0 ceph-mon[75222]: pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:26:31 compute-0 ceph-mon[75222]: pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:33 compute-0 ceph-mon[75222]: pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:26:35 compute-0 ceph-mon[75222]: pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:26:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
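[annotation] The raw (pre-quantization) pg targets above can be reproduced from the logged capacity ratios and biases if one assumes the default mon_target_pg_per_osd of 100 and the three OSDs listed earlier, i.e. a cluster-wide multiplier of 300; the "quantized" figure is then snapped toward a power of two, with tiny targets left at the pool's current pg_num. A worked check of two of the logged values (the multiplier is an assumption, not printed by the autoscaler itself):

    # Assumed inputs: mon_target_pg_per_osd = 100 (default) and 3 OSDs.
    TARGET_PG_PER_OSD = 100
    NUM_OSDS = 3

    def raw_pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    # Pool '.mgr': matches the logged 0.0021557249951162337
    print(raw_pg_target(7.185749983720779e-06, 1.0))
    # Pool 'cephfs.cephfs.meta' (bias 4.0): matches 0.002440038801380656
    print(raw_pg_target(2.0333656678172135e-06, 4.0))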
Dec 09 16:26:38 compute-0 ceph-mon[75222]: pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:39 compute-0 ceph-mon[75222]: pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:26:41 compute-0 ceph-mon[75222]: pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:42 compute-0 podman[247670]: 2025-12-09 16:26:42.613703275 +0000 UTC m=+0.056290172 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 09 16:26:43 compute-0 ceph-mon[75222]: pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:45 compute-0 ceph-mon[75222]: pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:26:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:47 compute-0 ceph-mon[75222]: pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:49 compute-0 ceph-mon[75222]: pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:26:51 compute-0 ceph-mon[75222]: pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:52 compute-0 nova_compute[243452]: 2025-12-09 16:26:52.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:26:52 compute-0 nova_compute[243452]: 2025-12-09 16:26:52.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 09 16:26:52 compute-0 nova_compute[243452]: 2025-12-09 16:26:52.072 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 09 16:26:52 compute-0 nova_compute[243452]: 2025-12-09 16:26:52.072 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:26:52 compute-0 nova_compute[243452]: 2025-12-09 16:26:52.073 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 09 16:26:52 compute-0 nova_compute[243452]: 2025-12-09 16:26:52.085 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
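[annotation] Each "Running periodic task ComputeManager.*" line above corresponds to a method registered with oslo.service's periodic_task machinery. An illustrative skeleton of that registration pattern (the real methods live in nova.compute.manager; the spacing values here are made up):

    from oslo_service import periodic_task

    class ComputeManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=600)  # spacing is illustrative
        def _run_pending_deletes(self, context):
            """Clean up instances whose deletion did not complete."""

        @periodic_task.periodic_task(spacing=600)  # spacing is illustrative
        def _cleanup_incomplete_migrations(self, context):
            """Remove leftovers from migrations that never finished."""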
Dec 09 16:26:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.095 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.096 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.116 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.117 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.117 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.117 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.118 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:26:53 compute-0 ceph-mon[75222]: pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:26:53 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3965243548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.648 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.793 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.794 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5133MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.795 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.795 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.886 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.886 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:26:53 compute-0 nova_compute[243452]: 2025-12-09 16:26:53.921 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:26:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:26:54 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/457437383' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:26:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:54 compute-0 nova_compute[243452]: 2025-12-09 16:26:54.462 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:26:54 compute-0 nova_compute[243452]: 2025-12-09 16:26:54.468 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:26:54 compute-0 nova_compute[243452]: 2025-12-09 16:26:54.483 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:26:54 compute-0 nova_compute[243452]: 2025-12-09 16:26:54.485 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:26:54 compute-0 nova_compute[243452]: 2025-12-09 16:26:54.485 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:26:54 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3965243548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:26:54 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/457437383' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:26:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:26:55 compute-0 ceph-mon[75222]: pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:26:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:26:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:26:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:26:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:26:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:26:57 compute-0 nova_compute[243452]: 2025-12-09 16:26:57.445 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:26:57 compute-0 nova_compute[243452]: 2025-12-09 16:26:57.446 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:26:57 compute-0 nova_compute[243452]: 2025-12-09 16:26:57.446 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:26:57 compute-0 nova_compute[243452]: 2025-12-09 16:26:57.447 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:26:57 compute-0 nova_compute[243452]: 2025-12-09 16:26:57.472 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:26:57 compute-0 nova_compute[243452]: 2025-12-09 16:26:57.473 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:26:57 compute-0 nova_compute[243452]: 2025-12-09 16:26:57.474 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:26:57 compute-0 nova_compute[243452]: 2025-12-09 16:26:57.474 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:26:57 compute-0 nova_compute[243452]: 2025-12-09 16:26:57.475 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:26:57 compute-0 nova_compute[243452]: 2025-12-09 16:26:57.475 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:26:57 compute-0 ceph-mon[75222]: pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:59 compute-0 nova_compute[243452]: 2025-12-09 16:26:59.056 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:26:59 compute-0 ceph-mon[75222]: pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:26:59 compute-0 podman[247734]: 2025-12-09 16:26:59.629473652 +0000 UTC m=+0.071555921 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:26:59 compute-0 podman[247735]: 2025-12-09 16:26:59.632957341 +0000 UTC m=+0.058118982 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:27:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:27:00 compute-0 sshd-session[247780]: Invalid user test from 146.190.31.45 port 49296
Dec 09 16:27:01 compute-0 sshd-session[247780]: Connection closed by invalid user test 146.190.31.45 port 49296 [preauth]
Dec 09 16:27:01 compute-0 ceph-mon[75222]: pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:03 compute-0 ceph-mon[75222]: pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:27:05 compute-0 ceph-mon[75222]: pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:07 compute-0 ceph-mon[75222]: pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:09 compute-0 ceph-mon[75222]: pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:27:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3018459475' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:27:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:27:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3018459475' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:27:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:27:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3018459475' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:27:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3018459475' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:27:11 compute-0 ceph-mon[75222]: pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:13 compute-0 podman[247782]: 2025-12-09 16:27:13.643839735 +0000 UTC m=+0.077847789 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec 09 16:27:13 compute-0 ceph-mon[75222]: pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:27:15 compute-0 ceph-mon[75222]: pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:17 compute-0 ceph-mon[75222]: pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:17 compute-0 sudo[247801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:27:17 compute-0 sudo[247801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:27:17 compute-0 sudo[247801]: pam_unix(sudo:session): session closed for user root
Dec 09 16:27:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:27:17.846 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:27:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:27:17.846 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:27:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:27:17.846 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:27:17 compute-0 sudo[247826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 09 16:27:17 compute-0 sudo[247826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:27:18 compute-0 podman[247893]: 2025-12-09 16:27:18.338936067 +0000 UTC m=+0.070831961 container exec 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:27:18 compute-0 podman[247893]: 2025-12-09 16:27:18.443836359 +0000 UTC m=+0.175732173 container exec_died 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 16:27:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:19 compute-0 sudo[247826]: pam_unix(sudo:session): session closed for user root
Dec 09 16:27:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:27:19 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:27:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:27:19 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:27:19 compute-0 sudo[248079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:27:19 compute-0 sudo[248079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:27:19 compute-0 sudo[248079]: pam_unix(sudo:session): session closed for user root
Dec 09 16:27:19 compute-0 sudo[248104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:27:19 compute-0 sudo[248104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:27:19 compute-0 ceph-mon[75222]: pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:19 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:27:19 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:27:20 compute-0 sudo[248104]: pam_unix(sudo:session): session closed for user root
Dec 09 16:27:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:27:20 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:27:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:27:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:27:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:27:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:27:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:27:20 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:27:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:27:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:27:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:27:20 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:27:20 compute-0 sudo[248161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:27:20 compute-0 sudo[248161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:27:20 compute-0 sudo[248161]: pam_unix(sudo:session): session closed for user root
Dec 09 16:27:20 compute-0 sudo[248186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:27:20 compute-0 sudo[248186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:27:20 compute-0 podman[248223]: 2025-12-09 16:27:20.45171109 +0000 UTC m=+0.048155001 container create 3ea4d85e561903c440ef123ce1a775893847f6bc4efae15d41724d2c37f11833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_williams, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 09 16:27:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:20 compute-0 systemd[1]: Started libpod-conmon-3ea4d85e561903c440ef123ce1a775893847f6bc4efae15d41724d2c37f11833.scope.
Dec 09 16:27:20 compute-0 podman[248223]: 2025-12-09 16:27:20.425553191 +0000 UTC m=+0.021997192 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:27:20 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:27:20 compute-0 podman[248223]: 2025-12-09 16:27:20.541642778 +0000 UTC m=+0.138086699 container init 3ea4d85e561903c440ef123ce1a775893847f6bc4efae15d41724d2c37f11833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_williams, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:27:20 compute-0 podman[248223]: 2025-12-09 16:27:20.547451812 +0000 UTC m=+0.143895733 container start 3ea4d85e561903c440ef123ce1a775893847f6bc4efae15d41724d2c37f11833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_williams, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 09 16:27:20 compute-0 podman[248223]: 2025-12-09 16:27:20.551184827 +0000 UTC m=+0.147628728 container attach 3ea4d85e561903c440ef123ce1a775893847f6bc4efae15d41724d2c37f11833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_williams, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 16:27:20 compute-0 recursing_williams[248239]: 167 167
Dec 09 16:27:20 compute-0 systemd[1]: libpod-3ea4d85e561903c440ef123ce1a775893847f6bc4efae15d41724d2c37f11833.scope: Deactivated successfully.
Dec 09 16:27:20 compute-0 podman[248223]: 2025-12-09 16:27:20.554176742 +0000 UTC m=+0.150620643 container died 3ea4d85e561903c440ef123ce1a775893847f6bc4efae15d41724d2c37f11833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_williams, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 16:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5e1d54f93f82255394d5a121bb79741addf3261db04fd7f371e02ba52f8fa65-merged.mount: Deactivated successfully.
Dec 09 16:27:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:27:20 compute-0 podman[248223]: 2025-12-09 16:27:20.595383655 +0000 UTC m=+0.191827556 container remove 3ea4d85e561903c440ef123ce1a775893847f6bc4efae15d41724d2c37f11833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_williams, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 09 16:27:20 compute-0 systemd[1]: libpod-conmon-3ea4d85e561903c440ef123ce1a775893847f6bc4efae15d41724d2c37f11833.scope: Deactivated successfully.
Dec 09 16:27:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:27:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:27:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:27:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:27:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:27:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:27:20 compute-0 podman[248262]: 2025-12-09 16:27:20.783371923 +0000 UTC m=+0.045601298 container create 423b503e38fd239e914bf1f1988c1aa283950916a17685964d800c060d0222d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:27:20 compute-0 systemd[1]: Started libpod-conmon-423b503e38fd239e914bf1f1988c1aa283950916a17685964d800c060d0222d2.scope.
Dec 09 16:27:20 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91dd8e8b1579d11dabeceebfa123d7da564a64cf5b247f7ec5501278c5dec934/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91dd8e8b1579d11dabeceebfa123d7da564a64cf5b247f7ec5501278c5dec934/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91dd8e8b1579d11dabeceebfa123d7da564a64cf5b247f7ec5501278c5dec934/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91dd8e8b1579d11dabeceebfa123d7da564a64cf5b247f7ec5501278c5dec934/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91dd8e8b1579d11dabeceebfa123d7da564a64cf5b247f7ec5501278c5dec934/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:20 compute-0 podman[248262]: 2025-12-09 16:27:20.767619628 +0000 UTC m=+0.029848993 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:27:20 compute-0 podman[248262]: 2025-12-09 16:27:20.88808142 +0000 UTC m=+0.150310875 container init 423b503e38fd239e914bf1f1988c1aa283950916a17685964d800c060d0222d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wilbur, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 09 16:27:20 compute-0 podman[248262]: 2025-12-09 16:27:20.902464846 +0000 UTC m=+0.164694241 container start 423b503e38fd239e914bf1f1988c1aa283950916a17685964d800c060d0222d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:27:20 compute-0 podman[248262]: 2025-12-09 16:27:20.907867728 +0000 UTC m=+0.170097103 container attach 423b503e38fd239e914bf1f1988c1aa283950916a17685964d800c060d0222d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 16:27:21 compute-0 kind_wilbur[248278]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:27:21 compute-0 kind_wilbur[248278]: --> All data devices are unavailable
Dec 09 16:27:21 compute-0 systemd[1]: libpod-423b503e38fd239e914bf1f1988c1aa283950916a17685964d800c060d0222d2.scope: Deactivated successfully.
Dec 09 16:27:21 compute-0 podman[248262]: 2025-12-09 16:27:21.454472721 +0000 UTC m=+0.716702146 container died 423b503e38fd239e914bf1f1988c1aa283950916a17685964d800c060d0222d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 09 16:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-91dd8e8b1579d11dabeceebfa123d7da564a64cf5b247f7ec5501278c5dec934-merged.mount: Deactivated successfully.
Dec 09 16:27:21 compute-0 podman[248262]: 2025-12-09 16:27:21.510101952 +0000 UTC m=+0.772331327 container remove 423b503e38fd239e914bf1f1988c1aa283950916a17685964d800c060d0222d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Dec 09 16:27:21 compute-0 systemd[1]: libpod-conmon-423b503e38fd239e914bf1f1988c1aa283950916a17685964d800c060d0222d2.scope: Deactivated successfully.
Dec 09 16:27:21 compute-0 sudo[248186]: pam_unix(sudo:session): session closed for user root
Dec 09 16:27:21 compute-0 sudo[248310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:27:21 compute-0 sudo[248310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:27:21 compute-0 sudo[248310]: pam_unix(sudo:session): session closed for user root
Dec 09 16:27:21 compute-0 sudo[248335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:27:21 compute-0 sudo[248335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:27:21 compute-0 ceph-mon[75222]: pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:22 compute-0 podman[248372]: 2025-12-09 16:27:22.00586609 +0000 UTC m=+0.035737190 container create aa7f6db50cfbd45f78106472c3745f5cdbd7e72b7b0d82f5e03586af8ab19b73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:27:22 compute-0 systemd[1]: Started libpod-conmon-aa7f6db50cfbd45f78106472c3745f5cdbd7e72b7b0d82f5e03586af8ab19b73.scope.
Dec 09 16:27:22 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:27:22 compute-0 podman[248372]: 2025-12-09 16:27:22.081609348 +0000 UTC m=+0.111480448 container init aa7f6db50cfbd45f78106472c3745f5cdbd7e72b7b0d82f5e03586af8ab19b73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 09 16:27:22 compute-0 podman[248372]: 2025-12-09 16:27:21.989208339 +0000 UTC m=+0.019079449 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:27:22 compute-0 podman[248372]: 2025-12-09 16:27:22.087537756 +0000 UTC m=+0.117408846 container start aa7f6db50cfbd45f78106472c3745f5cdbd7e72b7b0d82f5e03586af8ab19b73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:27:22 compute-0 podman[248372]: 2025-12-09 16:27:22.091992901 +0000 UTC m=+0.121864021 container attach aa7f6db50cfbd45f78106472c3745f5cdbd7e72b7b0d82f5e03586af8ab19b73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lovelace, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:27:22 compute-0 romantic_lovelace[248388]: 167 167
Dec 09 16:27:22 compute-0 systemd[1]: libpod-aa7f6db50cfbd45f78106472c3745f5cdbd7e72b7b0d82f5e03586af8ab19b73.scope: Deactivated successfully.
Dec 09 16:27:22 compute-0 podman[248372]: 2025-12-09 16:27:22.094799851 +0000 UTC m=+0.124670981 container died aa7f6db50cfbd45f78106472c3745f5cdbd7e72b7b0d82f5e03586af8ab19b73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 09 16:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-95051d19044cabcb5c2c623a7ef3fb6b09207637ee330f09971e2666fc1d084c-merged.mount: Deactivated successfully.
Dec 09 16:27:22 compute-0 podman[248372]: 2025-12-09 16:27:22.132513495 +0000 UTC m=+0.162384585 container remove aa7f6db50cfbd45f78106472c3745f5cdbd7e72b7b0d82f5e03586af8ab19b73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lovelace, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:27:22 compute-0 systemd[1]: libpod-conmon-aa7f6db50cfbd45f78106472c3745f5cdbd7e72b7b0d82f5e03586af8ab19b73.scope: Deactivated successfully.
Dec 09 16:27:22 compute-0 podman[248411]: 2025-12-09 16:27:22.27930554 +0000 UTC m=+0.037310784 container create 5a398ad7e3d2b121cb5ab0d3f2551d98a61b6850a3aa179a74df931bb0a72c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wilson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:27:22 compute-0 systemd[1]: Started libpod-conmon-5a398ad7e3d2b121cb5ab0d3f2551d98a61b6850a3aa179a74df931bb0a72c08.scope.
Dec 09 16:27:22 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ebf64d8052db44ff6f87f0aaecf1f41751d379c4a161653fc2c37c14da49b43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ebf64d8052db44ff6f87f0aaecf1f41751d379c4a161653fc2c37c14da49b43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ebf64d8052db44ff6f87f0aaecf1f41751d379c4a161653fc2c37c14da49b43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ebf64d8052db44ff6f87f0aaecf1f41751d379c4a161653fc2c37c14da49b43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:22 compute-0 podman[248411]: 2025-12-09 16:27:22.348907905 +0000 UTC m=+0.106913149 container init 5a398ad7e3d2b121cb5ab0d3f2551d98a61b6850a3aa179a74df931bb0a72c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 16:27:22 compute-0 podman[248411]: 2025-12-09 16:27:22.356394277 +0000 UTC m=+0.114399531 container start 5a398ad7e3d2b121cb5ab0d3f2551d98a61b6850a3aa179a74df931bb0a72c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:27:22 compute-0 podman[248411]: 2025-12-09 16:27:22.262003762 +0000 UTC m=+0.020009036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:27:22 compute-0 podman[248411]: 2025-12-09 16:27:22.359987848 +0000 UTC m=+0.117993112 container attach 5a398ad7e3d2b121cb5ab0d3f2551d98a61b6850a3aa179a74df931bb0a72c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wilson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:27:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:22 compute-0 angry_wilson[248427]: {
Dec 09 16:27:22 compute-0 angry_wilson[248427]:     "0": [
Dec 09 16:27:22 compute-0 angry_wilson[248427]:         {
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "devices": [
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "/dev/loop3"
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             ],
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_name": "ceph_lv0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_size": "21470642176",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "name": "ceph_lv0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "tags": {
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.cluster_name": "ceph",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.crush_device_class": "",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.encrypted": "0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.objectstore": "bluestore",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.osd_id": "0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.type": "block",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.vdo": "0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.with_tpm": "0"
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             },
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "type": "block",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "vg_name": "ceph_vg0"
Dec 09 16:27:22 compute-0 angry_wilson[248427]:         }
Dec 09 16:27:22 compute-0 angry_wilson[248427]:     ],
Dec 09 16:27:22 compute-0 angry_wilson[248427]:     "1": [
Dec 09 16:27:22 compute-0 angry_wilson[248427]:         {
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "devices": [
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "/dev/loop4"
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             ],
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_name": "ceph_lv1",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_size": "21470642176",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "name": "ceph_lv1",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "tags": {
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.cluster_name": "ceph",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.crush_device_class": "",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.encrypted": "0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.objectstore": "bluestore",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.osd_id": "1",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.type": "block",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.vdo": "0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.with_tpm": "0"
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             },
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "type": "block",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "vg_name": "ceph_vg1"
Dec 09 16:27:22 compute-0 angry_wilson[248427]:         }
Dec 09 16:27:22 compute-0 angry_wilson[248427]:     ],
Dec 09 16:27:22 compute-0 angry_wilson[248427]:     "2": [
Dec 09 16:27:22 compute-0 angry_wilson[248427]:         {
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "devices": [
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "/dev/loop5"
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             ],
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_name": "ceph_lv2",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_size": "21470642176",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "name": "ceph_lv2",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "tags": {
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.cluster_name": "ceph",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.crush_device_class": "",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.encrypted": "0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.objectstore": "bluestore",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.osd_id": "2",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.type": "block",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.vdo": "0",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:                 "ceph.with_tpm": "0"
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             },
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "type": "block",
Dec 09 16:27:22 compute-0 angry_wilson[248427]:             "vg_name": "ceph_vg2"
Dec 09 16:27:22 compute-0 angry_wilson[248427]:         }
Dec 09 16:27:22 compute-0 angry_wilson[248427]:     ]
Dec 09 16:27:22 compute-0 angry_wilson[248427]: }
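[editor's note] The JSON block emitted by angry_wilson above is the stdout of `ceph-volume lvm list --format json`, which cephadm runs to refresh its device inventory: a map of OSD id to the logical volumes backing it, with the same ceph.* metadata present both as the flat `lv_tags` string and as the parsed `tags` object. A minimal sketch of consuming that structure (hypothetical helper; assumes the JSON above has been captured into a string):

    import json

    def osd_block_devices(lvm_list_json: str) -> dict[int, str]:
        """Map OSD id -> backing LV path from `ceph-volume lvm list --format json`."""
        report = json.loads(lvm_list_json)
        mapping = {}
        for osd_id, lvs in report.items():
            for lv in lvs:
                if lv.get("type") == "block":      # bluestore data volume
                    mapping[int(osd_id)] = lv["lv_path"]
        return mapping

    # With the output logged above this yields:
    # {0: '/dev/ceph_vg0/ceph_lv0', 1: '/dev/ceph_vg1/ceph_lv1',
    #  2: '/dev/ceph_vg2/ceph_lv2'}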
Dec 09 16:27:22 compute-0 systemd[1]: libpod-5a398ad7e3d2b121cb5ab0d3f2551d98a61b6850a3aa179a74df931bb0a72c08.scope: Deactivated successfully.
Dec 09 16:27:22 compute-0 podman[248411]: 2025-12-09 16:27:22.652800255 +0000 UTC m=+0.410805539 container died 5a398ad7e3d2b121cb5ab0d3f2551d98a61b6850a3aa179a74df931bb0a72c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wilson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 09 16:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ebf64d8052db44ff6f87f0aaecf1f41751d379c4a161653fc2c37c14da49b43-merged.mount: Deactivated successfully.
Dec 09 16:27:22 compute-0 podman[248411]: 2025-12-09 16:27:22.950347916 +0000 UTC m=+0.708353200 container remove 5a398ad7e3d2b121cb5ab0d3f2551d98a61b6850a3aa179a74df931bb0a72c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:27:22 compute-0 sudo[248335]: pam_unix(sudo:session): session closed for user root
Dec 09 16:27:23 compute-0 systemd[1]: libpod-conmon-5a398ad7e3d2b121cb5ab0d3f2551d98a61b6850a3aa179a74df931bb0a72c08.scope: Deactivated successfully.
Dec 09 16:27:23 compute-0 sudo[248449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:27:23 compute-0 sudo[248449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:27:23 compute-0 sudo[248449]: pam_unix(sudo:session): session closed for user root
Dec 09 16:27:23 compute-0 sudo[248474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:27:23 compute-0 sudo[248474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:27:23 compute-0 podman[248511]: 2025-12-09 16:27:23.575401194 +0000 UTC m=+0.115307926 container create fd556ce1797dff4720bd65267ae0c2770db5593833e56b4166a36ab48ec368fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 09 16:27:23 compute-0 podman[248511]: 2025-12-09 16:27:23.498073451 +0000 UTC m=+0.037980263 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:27:23 compute-0 systemd[1]: Started libpod-conmon-fd556ce1797dff4720bd65267ae0c2770db5593833e56b4166a36ab48ec368fe.scope.
Dec 09 16:27:23 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:27:23 compute-0 podman[248511]: 2025-12-09 16:27:23.663178943 +0000 UTC m=+0.203085765 container init fd556ce1797dff4720bd65267ae0c2770db5593833e56b4166a36ab48ec368fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Dec 09 16:27:23 compute-0 podman[248511]: 2025-12-09 16:27:23.675299265 +0000 UTC m=+0.215206007 container start fd556ce1797dff4720bd65267ae0c2770db5593833e56b4166a36ab48ec368fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:27:23 compute-0 objective_dirac[248527]: 167 167
Dec 09 16:27:23 compute-0 systemd[1]: libpod-fd556ce1797dff4720bd65267ae0c2770db5593833e56b4166a36ab48ec368fe.scope: Deactivated successfully.
Dec 09 16:27:23 compute-0 podman[248511]: 2025-12-09 16:27:23.679974017 +0000 UTC m=+0.219880749 container attach fd556ce1797dff4720bd65267ae0c2770db5593833e56b4166a36ab48ec368fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:27:23 compute-0 podman[248511]: 2025-12-09 16:27:23.680618545 +0000 UTC m=+0.220525297 container died fd556ce1797dff4720bd65267ae0c2770db5593833e56b4166a36ab48ec368fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_dirac, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 16:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a08d13807492975458213198d32f5566cd06a2e32d89a68c622cda1f36d097c-merged.mount: Deactivated successfully.
Dec 09 16:27:23 compute-0 podman[248511]: 2025-12-09 16:27:23.714638186 +0000 UTC m=+0.254544928 container remove fd556ce1797dff4720bd65267ae0c2770db5593833e56b4166a36ab48ec368fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_dirac, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:27:23 compute-0 systemd[1]: libpod-conmon-fd556ce1797dff4720bd65267ae0c2770db5593833e56b4166a36ab48ec368fe.scope: Deactivated successfully.
Dec 09 16:27:23 compute-0 ceph-mon[75222]: pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:23 compute-0 podman[248551]: 2025-12-09 16:27:23.865134855 +0000 UTC m=+0.040511985 container create f9d0836a3a35fc5daa87aca22c8957970485aa2b62de1739b8f08b379feb6ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 09 16:27:23 compute-0 systemd[1]: Started libpod-conmon-f9d0836a3a35fc5daa87aca22c8957970485aa2b62de1739b8f08b379feb6ffd.scope.
Dec 09 16:27:23 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6f23f4e350d7ed7dae0ed606a6dd60dda2afb9f18a7ddca1fba41ff06aab1f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6f23f4e350d7ed7dae0ed606a6dd60dda2afb9f18a7ddca1fba41ff06aab1f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6f23f4e350d7ed7dae0ed606a6dd60dda2afb9f18a7ddca1fba41ff06aab1f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6f23f4e350d7ed7dae0ed606a6dd60dda2afb9f18a7ddca1fba41ff06aab1f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:27:23 compute-0 podman[248551]: 2025-12-09 16:27:23.846116628 +0000 UTC m=+0.021493728 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:27:23 compute-0 podman[248551]: 2025-12-09 16:27:23.955972559 +0000 UTC m=+0.131349629 container init f9d0836a3a35fc5daa87aca22c8957970485aa2b62de1739b8f08b379feb6ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_moser, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:27:23 compute-0 podman[248551]: 2025-12-09 16:27:23.963874702 +0000 UTC m=+0.139251782 container start f9d0836a3a35fc5daa87aca22c8957970485aa2b62de1739b8f08b379feb6ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 16:27:23 compute-0 podman[248551]: 2025-12-09 16:27:23.967088383 +0000 UTC m=+0.142465463 container attach f9d0836a3a35fc5daa87aca22c8957970485aa2b62de1739b8f08b379feb6ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_moser, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:27:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:24 compute-0 lvm[248647]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:27:24 compute-0 lvm[248647]: VG ceph_vg0 finished
Dec 09 16:27:24 compute-0 lvm[248648]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:27:24 compute-0 lvm[248648]: VG ceph_vg1 finished
Dec 09 16:27:24 compute-0 lvm[248650]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:27:24 compute-0 lvm[248650]: VG ceph_vg2 finished
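[editor's note] The three lvm[...] pairs above are udev event activation: as each loop-backed PV (/dev/loop3-5) is observed online, its single-PV volume group becomes complete and is activated, which is what keeps the ceph_vg*/ceph_lv* volumes visible to the inventory passes around them. The same state can be cross-checked against LVM's own JSON reporting; a sketch using stock `lvs` options and standard report field names:

    import json, subprocess

    # List LVs carrying ceph-volume tags, mirroring the VGs just activated.
    out = subprocess.run(
        ["lvs", "-o", "lv_name,vg_name,lv_tags", "--reportformat", "json"],
        capture_output=True, text=True, check=True).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        if "ceph.osd_id" in lv["lv_tags"]:
            print(lv["vg_name"], lv["lv_name"])   # ceph_vg0 ceph_lv0, ...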
Dec 09 16:27:24 compute-0 agitated_moser[248569]: {}
Dec 09 16:27:24 compute-0 systemd[1]: libpod-f9d0836a3a35fc5daa87aca22c8957970485aa2b62de1739b8f08b379feb6ffd.scope: Deactivated successfully.
Dec 09 16:27:24 compute-0 systemd[1]: libpod-f9d0836a3a35fc5daa87aca22c8957970485aa2b62de1739b8f08b379feb6ffd.scope: Consumed 1.265s CPU time.
Dec 09 16:27:24 compute-0 podman[248551]: 2025-12-09 16:27:24.736192547 +0000 UTC m=+0.911569627 container died f9d0836a3a35fc5daa87aca22c8957970485aa2b62de1739b8f08b379feb6ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6f23f4e350d7ed7dae0ed606a6dd60dda2afb9f18a7ddca1fba41ff06aab1f0-merged.mount: Deactivated successfully.
Dec 09 16:27:24 compute-0 podman[248551]: 2025-12-09 16:27:24.776692961 +0000 UTC m=+0.952070051 container remove f9d0836a3a35fc5daa87aca22c8957970485aa2b62de1739b8f08b379feb6ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_moser, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:27:24 compute-0 systemd[1]: libpod-conmon-f9d0836a3a35fc5daa87aca22c8957970485aa2b62de1739b8f08b379feb6ffd.scope: Deactivated successfully.
Dec 09 16:27:24 compute-0 sudo[248474]: pam_unix(sudo:session): session closed for user root
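[editor's note] The sudo COMMAND logged at 16:27:23 shows the shape of every cephadm ceph-volume call in this section: a per-cluster copy of the cephadm script under /var/lib/ceph/<fsid>/, a pinned image digest, a --timeout, and the ceph-volume subcommand after `--`. This run (`raw list --format json`, executed in the agitated_moser container) printed `{}` because all three OSDs here are LVM-backed, so there are no raw-device OSDs to report. A hedged sketch of driving the same invocation from Python (argument values copied from the log; assumes the JSON is the only thing on stdout):

    import json, subprocess

    def cephadm_ceph_volume(cephadm_path, image, fsid, *args):
        # Mirrors the logged command line:
        #   python3 <cephadm> --image <digest> --timeout 895 \
        #       ceph-volume --fsid <fsid> -- <args...>
        cmd = ["sudo", "/bin/python3", cephadm_path,
               "--image", image, "--timeout", "895",
               "ceph-volume", "--fsid", fsid, "--", *args]
        out = subprocess.run(cmd, check=True, capture_output=True,
                             text=True).stdout
        return json.loads(out)

    # cephadm_ceph_volume(path, image, fsid, "raw", "list", "--format", "json")
    # -> {} on this host, matching the agitated_moser output above.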
Dec 09 16:27:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:27:24 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:27:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:27:24 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
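[editor's note] With the scans done, the mgr persists the refreshed inventory through the monitor: the two mon_command audit entries above are `config-key set` calls against mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0. The cached blob can be read back for inspection; a sketch, assuming (as cephadm's cache entries generally are) that the stored value is JSON:

    import json, subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"   # key from the audit log above
    blob = subprocess.run(["ceph", "config-key", "get", key],
                          capture_output=True, text=True, check=True).stdout
    devices = json.loads(blob)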
Dec 09 16:27:24 compute-0 sudo[248666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:27:24 compute-0 sudo[248666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:27:24 compute-0 sudo[248666]: pam_unix(sudo:session): session closed for user root
Dec 09 16:27:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:27:25 compute-0 ceph-mon[75222]: pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:27:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:27:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:27:25
Dec 09 16:27:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:27:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:27:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', '.mgr', 'default.rgw.meta', '.rgw.root', 'volumes']
Dec 09 16:27:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:27:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:27:27 compute-0 ceph-mon[75222]: pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:29 compute-0 ceph-mon[75222]: pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:27:30 compute-0 podman[248692]: 2025-12-09 16:27:30.636863908 +0000 UTC m=+0.070020718 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 09 16:27:30 compute-0 podman[248691]: 2025-12-09 16:27:30.70175022 +0000 UTC m=+0.133173361 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
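[editor's note] The two health_status events above come from podman's healthcheck timers: per the embedded config_data, each container bind-mounts /var/lib/openstack/healthchecks/<name> at /openstack and runs /openstack/healthcheck as its test, and a zero exit is reported as health_status=healthy with health_failing_streak=0. The same probe can be fired on demand; a sketch using the container names from these events:

    import subprocess

    # `podman healthcheck run` executes the container's configured test;
    # exit status 0 corresponds to the healthy status logged above.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")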
Dec 09 16:27:31 compute-0 ceph-mon[75222]: pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:33 compute-0 ceph-mon[75222]: pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:27:35 compute-0 ceph-mon[75222]: pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:27:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
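[editor's note] The per-pool pg_autoscaler lines follow a simple proportionality: raw pg target = capacity ratio x bias x overall PG budget, where the budget here is 300 (3 OSDs x the default mon_target_pg_per_osd of 100; the 64411926528-byte capacity in each line is exactly three 21470642176-byte OSD LVs). The result is then quantized to a power of two and only applied when it deviates enough from the current pg_num, which is why the idle pools log 'pg target 0.0 quantized to 32 (current 32)'. A worked sketch reproducing the two nonzero figures above (simplified; the real module also honors target_size_ratio and min/max clamps):

    # Proportional step of the pg_autoscaler, reconstructed from the log.
    def pg_target(capacity_ratio, bias, osds=3, target_pg_per_osd=100):
        return capacity_ratio * bias * osds * target_pg_per_osd

    def quantize_pow2(x, floor=1):
        n = floor                      # round up to the next power of two
        while n < x:
            n *= 2
        return n

    print(pg_target(7.185749983720779e-06, 1.0))   # 0.0021557... ('.mgr')
    print(pg_target(2.0333656678172135e-06, 4.0))  # 0.0024400... ('cephfs.cephfs.meta')
    print(quantize_pow2(pg_target(7.185749983720779e-06, 1.0)))  # 1, as logged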
Dec 09 16:27:37 compute-0 ceph-mon[75222]: pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:39 compute-0 ceph-mon[75222]: pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:27:41 compute-0 ceph-mon[75222]: pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:43 compute-0 ceph-mon[75222]: pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:44 compute-0 podman[248732]: 2025-12-09 16:27:44.689417367 +0000 UTC m=+0.116282594 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 16:27:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:27:45 compute-0 ceph-mon[75222]: pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:46 compute-0 sshd-session[248752]: Invalid user test from 146.190.31.45 port 58830
Dec 09 16:27:46 compute-0 sshd-session[248752]: Connection closed by invalid user test 146.190.31.45 port 58830 [preauth]
Dec 09 16:27:47 compute-0 ceph-mon[75222]: pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:49 compute-0 ceph-mon[75222]: pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:27:51 compute-0 ceph-mon[75222]: pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:53 compute-0 nova_compute[243452]: 2025-12-09 16:27:53.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:27:53 compute-0 nova_compute[243452]: 2025-12-09 16:27:53.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:27:53 compute-0 nova_compute[243452]: 2025-12-09 16:27:53.778 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:27:53 compute-0 nova_compute[243452]: 2025-12-09 16:27:53.779 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:27:53 compute-0 nova_compute[243452]: 2025-12-09 16:27:53.779 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:27:53 compute-0 nova_compute[243452]: 2025-12-09 16:27:53.779 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:27:53 compute-0 nova_compute[243452]: 2025-12-09 16:27:53.779 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:27:53 compute-0 ceph-mon[75222]: pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:27:54 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3978560942' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:27:54 compute-0 nova_compute[243452]: 2025-12-09 16:27:54.381 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:27:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:54 compute-0 nova_compute[243452]: 2025-12-09 16:27:54.555 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:27:54 compute-0 nova_compute[243452]: 2025-12-09 16:27:54.557 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5134MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:27:54 compute-0 nova_compute[243452]: 2025-12-09 16:27:54.557 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:27:54 compute-0 nova_compute[243452]: 2025-12-09 16:27:54.557 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:27:54 compute-0 nova_compute[243452]: 2025-12-09 16:27:54.779 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:27:54 compute-0 nova_compute[243452]: 2025-12-09 16:27:54.780 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:27:54 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3978560942' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:27:54 compute-0 nova_compute[243452]: 2025-12-09 16:27:54.911 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Refreshing inventories for resource provider ca130087-db63-46e1-b278-a80bb66e6865 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 09 16:27:55 compute-0 nova_compute[243452]: 2025-12-09 16:27:55.000 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Updating ProviderTree inventory for provider ca130087-db63-46e1-b278-a80bb66e6865 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 09 16:27:55 compute-0 nova_compute[243452]: 2025-12-09 16:27:55.000 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Updating inventory in ProviderTree for provider ca130087-db63-46e1-b278-a80bb66e6865 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 16:27:55 compute-0 nova_compute[243452]: 2025-12-09 16:27:55.021 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Refreshing aggregate associations for resource provider ca130087-db63-46e1-b278-a80bb66e6865, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 09 16:27:55 compute-0 nova_compute[243452]: 2025-12-09 16:27:55.043 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Refreshing trait associations for resource provider ca130087-db63-46e1-b278-a80bb66e6865, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 09 16:27:55 compute-0 nova_compute[243452]: 2025-12-09 16:27:55.058 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:27:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:27:55 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526924670' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:27:55 compute-0 nova_compute[243452]: 2025-12-09 16:27:55.581 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:27:55 compute-0 nova_compute[243452]: 2025-12-09 16:27:55.588 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:27:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:27:55 compute-0 ceph-mon[75222]: pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:55 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/526924670' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
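
The exchange above (the processutils "Running cmd" line at 16:27:55.058, the mon's handle_command/dispatch, and the 0-exit return 0.523s later) is nova's RBD storage backend shelling out to "ceph df" to size DISK_GB. A minimal sketch of the same call, assuming the usual "ceph df --format=json" schema with a top-level "stats" object of byte counters:

    # Minimal sketch (not nova's code): run the same "ceph df" the log shows
    # and read cluster-wide free space. Assumes the standard JSON schema with
    # a top-level "stats" object carrying byte counters.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    avail_gib = stats['total_avail_bytes'] / 1024 ** 3
    print(f"cluster free: {avail_gib:.1f} GiB")  # ~60 GiB per the pgmap lines
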
Dec 09 16:27:56 compute-0 nova_compute[243452]: 2025-12-09 16:27:56.341 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:27:56 compute-0 nova_compute[243452]: 2025-12-09 16:27:56.343 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:27:56 compute-0 nova_compute[243452]: 2025-12-09 16:27:56.343 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
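
The '"released" ... held 1.786s' line closes the lock that was taken around the whole resource-update pass; the acquire/release bookkeeping comes from oslo.concurrency's lockutils. A sketch of the same pattern, where the lock name matches the log but the function body is a stand-in:

    # Sketch of the oslo.concurrency pattern behind the lock lines above.
    # The lock name matches the log; the decorated body is hypothetical.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # While held, lockutils emits DEBUG lines like
        # 'Lock "compute_resources" acquired ...' and
        # '... "released" ... held N.NNNs', as seen above.
        pass

    update_available_resource()
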
Dec 09 16:27:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:27:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:27:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:27:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:27:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:27:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:27:57 compute-0 ceph-mon[75222]: pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:58 compute-0 nova_compute[243452]: 2025-12-09 16:27:58.336 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:27:58 compute-0 nova_compute[243452]: 2025-12-09 16:27:58.336 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:27:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:27:59 compute-0 ceph-mon[75222]: pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:28:01 compute-0 podman[248799]: 2025-12-09 16:28:01.615297476 +0000 UTC m=+0.056063334 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 09 16:28:01 compute-0 podman[248798]: 2025-12-09 16:28:01.650553162 +0000 UTC m=+0.094213032 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec 09 16:28:01 compute-0 nova_compute[243452]: 2025-12-09 16:28:01.898 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:28:01 compute-0 nova_compute[243452]: 2025-12-09 16:28:01.898 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:28:01 compute-0 nova_compute[243452]: 2025-12-09 16:28:01.898 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:28:01 compute-0 ceph-mon[75222]: pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:01 compute-0 nova_compute[243452]: 2025-12-09 16:28:01.964 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:28:01 compute-0 nova_compute[243452]: 2025-12-09 16:28:01.964 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:28:01 compute-0 nova_compute[243452]: 2025-12-09 16:28:01.965 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:28:01 compute-0 nova_compute[243452]: 2025-12-09 16:28:01.966 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:28:01 compute-0 nova_compute[243452]: 2025-12-09 16:28:01.966 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:28:01 compute-0 nova_compute[243452]: 2025-12-09 16:28:01.966 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:28:01 compute-0 nova_compute[243452]: 2025-12-09 16:28:01.967 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
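
The burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic-task machinery walking every registered task; a task can decide to do nothing, as _reclaim_queued_deletes does here when CONF.reclaim_instance_interval <= 0. A minimal sketch of that machinery, with a hypothetical manager and task name:

    # Sketch of the oslo.service machinery behind the "Running periodic
    # task ..." lines above. DemoManager/_check_something are stand-ins.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # run at most once per 60s
        def _check_something(self, context):
            print('periodic task ran')

    mgr = DemoManager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # normally driven by a looping call
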
Dec 09 16:28:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:03 compute-0 ceph-mon[75222]: pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:28:05 compute-0 ceph-mon[75222]: pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:07 compute-0 ceph-mon[75222]: pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:09 compute-0 ceph-mon[75222]: pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:28:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/220607119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:28:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:28:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/220607119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:28:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:28:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/220607119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:28:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/220607119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
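
Each handle_command line above shows the command arriving at the mon as a JSON mon_command; clients can issue the same thing directly through librados' Python binding rather than the CLI. A sketch mirroring the "osd pool get-quota" dispatch logged at 16:28:10, assuming /etc/ceph/ceph.conf and a readable keyring for client.openstack as used by the services on this host:

    # Sketch: issue the same mon command the audit log shows, via the
    # librados Python binding. Config path and client name are taken from
    # the log; keyring readability is assumed.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    cmd = {'prefix': 'osd pool get-quota', 'pool': 'volumes', 'format': 'json'}
    ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
    print(ret, json.loads(out))  # e.g. quota_max_bytes / quota_max_objects
    cluster.shutdown()
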
Dec 09 16:28:11 compute-0 ceph-mon[75222]: pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:13 compute-0 ceph-mon[75222]: pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:28:15 compute-0 podman[248842]: 2025-12-09 16:28:15.616351384 +0000 UTC m=+0.063482523 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:28:16 compute-0 ceph-mon[75222]: pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:17 compute-0 ceph-mon[75222]: pgmap v866: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:28:17.847 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:28:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:28:17.847 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:28:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:28:17.848 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:28:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:19 compute-0 ceph-mon[75222]: pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:28:21 compute-0 ceph-mon[75222]: pgmap v868: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:23 compute-0 ceph-mon[75222]: pgmap v869: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:24 compute-0 sudo[248862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:28:24 compute-0 sudo[248862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:28:24 compute-0 sudo[248862]: pam_unix(sudo:session): session closed for user root
Dec 09 16:28:25 compute-0 sudo[248887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:28:25 compute-0 sudo[248887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:28:25 compute-0 sudo[248887]: pam_unix(sudo:session): session closed for user root
Dec 09 16:28:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:28:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:28:25 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:28:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:28:25 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:28:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:28:25 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:28:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:28:25 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:28:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:28:25 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:28:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:28:25 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:28:25 compute-0 sudo[248943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:28:25 compute-0 sudo[248943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:28:25 compute-0 sudo[248943]: pam_unix(sudo:session): session closed for user root
Dec 09 16:28:25 compute-0 ceph-mon[75222]: pgmap v870: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:28:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:28:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:28:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:28:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:28:25 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:28:25 compute-0 sudo[248968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:28:25 compute-0 sudo[248968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:28:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:28:25
Dec 09 16:28:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:28:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:28:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['backups', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.meta']
Dec 09 16:28:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:28:26 compute-0 podman[249005]: 2025-12-09 16:28:26.035681456 +0000 UTC m=+0.056349872 container create 85cc4e13684919a21c0da1f4bede0e4a21d5c7d7ec8ff27ba56403084a544377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_goldberg, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 09 16:28:26 compute-0 systemd[1]: Started libpod-conmon-85cc4e13684919a21c0da1f4bede0e4a21d5c7d7ec8ff27ba56403084a544377.scope.
Dec 09 16:28:26 compute-0 podman[249005]: 2025-12-09 16:28:26.004906747 +0000 UTC m=+0.025575193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:28:26 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:28:26 compute-0 podman[249005]: 2025-12-09 16:28:26.118218526 +0000 UTC m=+0.138886992 container init 85cc4e13684919a21c0da1f4bede0e4a21d5c7d7ec8ff27ba56403084a544377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:28:26 compute-0 podman[249005]: 2025-12-09 16:28:26.128639051 +0000 UTC m=+0.149307467 container start 85cc4e13684919a21c0da1f4bede0e4a21d5c7d7ec8ff27ba56403084a544377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_goldberg, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 09 16:28:26 compute-0 podman[249005]: 2025-12-09 16:28:26.13179206 +0000 UTC m=+0.152460506 container attach 85cc4e13684919a21c0da1f4bede0e4a21d5c7d7ec8ff27ba56403084a544377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:28:26 compute-0 systemd[1]: libpod-85cc4e13684919a21c0da1f4bede0e4a21d5c7d7ec8ff27ba56403084a544377.scope: Deactivated successfully.
Dec 09 16:28:26 compute-0 crazy_goldberg[249021]: 167 167
Dec 09 16:28:26 compute-0 conmon[249021]: conmon 85cc4e13684919a21c0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85cc4e13684919a21c0da1f4bede0e4a21d5c7d7ec8ff27ba56403084a544377.scope/container/memory.events
Dec 09 16:28:26 compute-0 podman[249005]: 2025-12-09 16:28:26.136181654 +0000 UTC m=+0.156850080 container died 85cc4e13684919a21c0da1f4bede0e4a21d5c7d7ec8ff27ba56403084a544377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_goldberg, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 09 16:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aa17763d890323702aaab72f4bed6a80f486a105084d8873301c90c7ca9a337-merged.mount: Deactivated successfully.
Dec 09 16:28:26 compute-0 podman[249005]: 2025-12-09 16:28:26.172986843 +0000 UTC m=+0.193655259 container remove 85cc4e13684919a21c0da1f4bede0e4a21d5c7d7ec8ff27ba56403084a544377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:28:26 compute-0 systemd[1]: libpod-conmon-85cc4e13684919a21c0da1f4bede0e4a21d5c7d7ec8ff27ba56403084a544377.scope: Deactivated successfully.
Dec 09 16:28:26 compute-0 podman[249045]: 2025-12-09 16:28:26.390798383 +0000 UTC m=+0.066585641 container create 7b1a7c96049b2781e25748da246b993bf927ed7b7b3a3eb0e68b2cea5f537c96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:28:26 compute-0 systemd[1]: Started libpod-conmon-7b1a7c96049b2781e25748da246b993bf927ed7b7b3a3eb0e68b2cea5f537c96.scope.
Dec 09 16:28:26 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:28:26 compute-0 podman[249045]: 2025-12-09 16:28:26.368662048 +0000 UTC m=+0.044449296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8479015cde8ba78a78bdeeb67f9329f81994f4259dbab112a5b38960d8e736e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8479015cde8ba78a78bdeeb67f9329f81994f4259dbab112a5b38960d8e736e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8479015cde8ba78a78bdeeb67f9329f81994f4259dbab112a5b38960d8e736e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8479015cde8ba78a78bdeeb67f9329f81994f4259dbab112a5b38960d8e736e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8479015cde8ba78a78bdeeb67f9329f81994f4259dbab112a5b38960d8e736e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:26 compute-0 podman[249045]: 2025-12-09 16:28:26.482457501 +0000 UTC m=+0.158244819 container init 7b1a7c96049b2781e25748da246b993bf927ed7b7b3a3eb0e68b2cea5f537c96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:26 compute-0 podman[249045]: 2025-12-09 16:28:26.495772446 +0000 UTC m=+0.171559694 container start 7b1a7c96049b2781e25748da246b993bf927ed7b7b3a3eb0e68b2cea5f537c96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:28:26 compute-0 podman[249045]: 2025-12-09 16:28:26.499631995 +0000 UTC m=+0.175419243 container attach 7b1a7c96049b2781e25748da246b993bf927ed7b7b3a3eb0e68b2cea5f537c96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:28:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:28:26 compute-0 hardcore_morse[249061]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:28:26 compute-0 hardcore_morse[249061]: --> All data devices are unavailable
Dec 09 16:28:27 compute-0 systemd[1]: libpod-7b1a7c96049b2781e25748da246b993bf927ed7b7b3a3eb0e68b2cea5f537c96.scope: Deactivated successfully.
Dec 09 16:28:27 compute-0 podman[249045]: 2025-12-09 16:28:27.019662688 +0000 UTC m=+0.695449946 container died 7b1a7c96049b2781e25748da246b993bf927ed7b7b3a3eb0e68b2cea5f537c96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 09 16:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8479015cde8ba78a78bdeeb67f9329f81994f4259dbab112a5b38960d8e736e7-merged.mount: Deactivated successfully.
Dec 09 16:28:27 compute-0 podman[249045]: 2025-12-09 16:28:27.063059384 +0000 UTC m=+0.738846602 container remove 7b1a7c96049b2781e25748da246b993bf927ed7b7b3a3eb0e68b2cea5f537c96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:28:27 compute-0 systemd[1]: libpod-conmon-7b1a7c96049b2781e25748da246b993bf927ed7b7b3a3eb0e68b2cea5f537c96.scope: Deactivated successfully.
Dec 09 16:28:27 compute-0 sudo[248968]: pam_unix(sudo:session): session closed for user root
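
The "All data devices are unavailable" result at 16:28:26 is ceph-volume's lvm batch declining the three LVs, which is consistent with them already carrying OSDs; cephadm immediately follows up with the "ceph-volume lvm list --format json" invocation on the next sudo line, whose per-OSD report appears below. A sketch folding that report into an osd_id -> logical-volume map, assuming the JSON has been captured to a file (the path is a placeholder):

    # Sketch: fold the "ceph-volume lvm list --format json" report (shown
    # below) into an osd_id -> logical-volume map. The capture path is
    # hypothetical.
    import json

    with open('/tmp/ceph-volume-lvm-list.json') as f:  # placeholder path
        report = json.load(f)

    for osd_id, records in sorted(report.items()):
        for rec in records:
            print(osd_id, rec['lv_path'], rec['tags'].get('ceph.osd_fsid'))
    # e.g. 0 /dev/ceph_vg0/ceph_lv0 5f4f01e5-fa0f-4477-b4bb-353e06b17907
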
Dec 09 16:28:27 compute-0 sudo[249093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:28:27 compute-0 sudo[249093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:28:27 compute-0 sudo[249093]: pam_unix(sudo:session): session closed for user root
Dec 09 16:28:27 compute-0 sudo[249118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:28:27 compute-0 sudo[249118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:28:27 compute-0 podman[249155]: 2025-12-09 16:28:27.523539245 +0000 UTC m=+0.047290846 container create 3b5050c2c2812793ea532a44a2bcf3392e481daf0a3b3892101ede2d037aed16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hodgkin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:28:27 compute-0 systemd[1]: Started libpod-conmon-3b5050c2c2812793ea532a44a2bcf3392e481daf0a3b3892101ede2d037aed16.scope.
Dec 09 16:28:27 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:28:27 compute-0 podman[249155]: 2025-12-09 16:28:27.496495451 +0000 UTC m=+0.020247142 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:28:27 compute-0 podman[249155]: 2025-12-09 16:28:27.634354464 +0000 UTC m=+0.158106085 container init 3b5050c2c2812793ea532a44a2bcf3392e481daf0a3b3892101ede2d037aed16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hodgkin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:28:27 compute-0 podman[249155]: 2025-12-09 16:28:27.640308932 +0000 UTC m=+0.164060533 container start 3b5050c2c2812793ea532a44a2bcf3392e481daf0a3b3892101ede2d037aed16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Dec 09 16:28:27 compute-0 quizzical_hodgkin[249172]: 167 167
Dec 09 16:28:27 compute-0 systemd[1]: libpod-3b5050c2c2812793ea532a44a2bcf3392e481daf0a3b3892101ede2d037aed16.scope: Deactivated successfully.
Dec 09 16:28:27 compute-0 podman[249155]: 2025-12-09 16:28:27.679262662 +0000 UTC m=+0.203014283 container attach 3b5050c2c2812793ea532a44a2bcf3392e481daf0a3b3892101ede2d037aed16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hodgkin, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 09 16:28:27 compute-0 podman[249155]: 2025-12-09 16:28:27.679644602 +0000 UTC m=+0.203396203 container died 3b5050c2c2812793ea532a44a2bcf3392e481daf0a3b3892101ede2d037aed16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5b804e7f538ff692489638fa777790df997b53d4fd52b476aa98092b192dbbd-merged.mount: Deactivated successfully.
Dec 09 16:28:27 compute-0 ceph-mon[75222]: pgmap v871: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:27 compute-0 podman[249155]: 2025-12-09 16:28:27.713024675 +0000 UTC m=+0.236776276 container remove 3b5050c2c2812793ea532a44a2bcf3392e481daf0a3b3892101ede2d037aed16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hodgkin, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 09 16:28:27 compute-0 systemd[1]: libpod-conmon-3b5050c2c2812793ea532a44a2bcf3392e481daf0a3b3892101ede2d037aed16.scope: Deactivated successfully.
Dec 09 16:28:27 compute-0 podman[249198]: 2025-12-09 16:28:27.880750911 +0000 UTC m=+0.049546010 container create 4d389643475fb8f904e8c442bff6b33deee98af3335640dcf759912731d1de69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_maxwell, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 09 16:28:27 compute-0 systemd[1]: Started libpod-conmon-4d389643475fb8f904e8c442bff6b33deee98af3335640dcf759912731d1de69.scope.
Dec 09 16:28:27 compute-0 podman[249198]: 2025-12-09 16:28:27.857845024 +0000 UTC m=+0.026640133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:28:27 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46685c5a5b57aa6466ecf55817f1a4e3be66c9fece5c39113fe23ae37ad93b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46685c5a5b57aa6466ecf55817f1a4e3be66c9fece5c39113fe23ae37ad93b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46685c5a5b57aa6466ecf55817f1a4e3be66c9fece5c39113fe23ae37ad93b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46685c5a5b57aa6466ecf55817f1a4e3be66c9fece5c39113fe23ae37ad93b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:27 compute-0 podman[249198]: 2025-12-09 16:28:27.975786304 +0000 UTC m=+0.144581423 container init 4d389643475fb8f904e8c442bff6b33deee98af3335640dcf759912731d1de69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_maxwell, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:28:27 compute-0 podman[249198]: 2025-12-09 16:28:27.988194474 +0000 UTC m=+0.156989573 container start 4d389643475fb8f904e8c442bff6b33deee98af3335640dcf759912731d1de69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 09 16:28:27 compute-0 podman[249198]: 2025-12-09 16:28:27.995024447 +0000 UTC m=+0.163819556 container attach 4d389643475fb8f904e8c442bff6b33deee98af3335640dcf759912731d1de69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]: {
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:     "0": [
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:         {
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "devices": [
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "/dev/loop3"
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             ],
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_name": "ceph_lv0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_size": "21470642176",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "name": "ceph_lv0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "tags": {
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.cluster_name": "ceph",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.crush_device_class": "",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.encrypted": "0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.objectstore": "bluestore",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.osd_id": "0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.type": "block",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.vdo": "0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.with_tpm": "0"
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             },
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "type": "block",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "vg_name": "ceph_vg0"
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:         }
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:     ],
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:     "1": [
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:         {
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "devices": [
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "/dev/loop4"
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             ],
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_name": "ceph_lv1",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_size": "21470642176",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "name": "ceph_lv1",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "tags": {
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.cluster_name": "ceph",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.crush_device_class": "",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.encrypted": "0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.objectstore": "bluestore",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.osd_id": "1",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.type": "block",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.vdo": "0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.with_tpm": "0"
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             },
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "type": "block",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "vg_name": "ceph_vg1"
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:         }
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:     ],
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:     "2": [
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:         {
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "devices": [
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "/dev/loop5"
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             ],
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_name": "ceph_lv2",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_size": "21470642176",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "name": "ceph_lv2",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "tags": {
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.cluster_name": "ceph",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.crush_device_class": "",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.encrypted": "0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.objectstore": "bluestore",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.osd_id": "2",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.type": "block",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.vdo": "0",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:                 "ceph.with_tpm": "0"
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             },
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "type": "block",
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:             "vg_name": "ceph_vg2"
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:         }
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]:     ]
Dec 09 16:28:28 compute-0 mystifying_maxwell[249215]: }
Dec 09 16:28:28 compute-0 systemd[1]: libpod-4d389643475fb8f904e8c442bff6b33deee98af3335640dcf759912731d1de69.scope: Deactivated successfully.
Dec 09 16:28:28 compute-0 podman[249198]: 2025-12-09 16:28:28.319610512 +0000 UTC m=+0.488405571 container died 4d389643475fb8f904e8c442bff6b33deee98af3335640dcf759912731d1de69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_maxwell, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:28:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c46685c5a5b57aa6466ecf55817f1a4e3be66c9fece5c39113fe23ae37ad93b9-merged.mount: Deactivated successfully.
Dec 09 16:28:28 compute-0 podman[249198]: 2025-12-09 16:28:28.369921322 +0000 UTC m=+0.538716381 container remove 4d389643475fb8f904e8c442bff6b33deee98af3335640dcf759912731d1de69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_maxwell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:28:28 compute-0 systemd[1]: libpod-conmon-4d389643475fb8f904e8c442bff6b33deee98af3335640dcf759912731d1de69.scope: Deactivated successfully.
Dec 09 16:28:28 compute-0 sudo[249118]: pam_unix(sudo:session): session closed for user root
Dec 09 16:28:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:28 compute-0 sudo[249237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:28:28 compute-0 sudo[249237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:28:28 compute-0 sudo[249237]: pam_unix(sudo:session): session closed for user root
Dec 09 16:28:28 compute-0 sudo[249262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:28:28 compute-0 sudo[249262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:28:28 compute-0 podman[249299]: 2025-12-09 16:28:28.917932424 +0000 UTC m=+0.050799105 container create d2fe6f0de745075fdd3e5bf4db1636a231b6bc38f258aaab039dd4dddc43dab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 09 16:28:28 compute-0 systemd[1]: Started libpod-conmon-d2fe6f0de745075fdd3e5bf4db1636a231b6bc38f258aaab039dd4dddc43dab0.scope.
Dec 09 16:28:28 compute-0 podman[249299]: 2025-12-09 16:28:28.897053684 +0000 UTC m=+0.029920355 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:28:29 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:28:29 compute-0 podman[249299]: 2025-12-09 16:28:29.038713694 +0000 UTC m=+0.171580445 container init d2fe6f0de745075fdd3e5bf4db1636a231b6bc38f258aaab039dd4dddc43dab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:28:29 compute-0 podman[249299]: 2025-12-09 16:28:29.050403084 +0000 UTC m=+0.183269725 container start d2fe6f0de745075fdd3e5bf4db1636a231b6bc38f258aaab039dd4dddc43dab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 09 16:28:29 compute-0 podman[249299]: 2025-12-09 16:28:29.054696395 +0000 UTC m=+0.187563076 container attach d2fe6f0de745075fdd3e5bf4db1636a231b6bc38f258aaab039dd4dddc43dab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_boyd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:28:29 compute-0 silly_boyd[249315]: 167 167
Dec 09 16:28:29 compute-0 systemd[1]: libpod-d2fe6f0de745075fdd3e5bf4db1636a231b6bc38f258aaab039dd4dddc43dab0.scope: Deactivated successfully.
Dec 09 16:28:29 compute-0 podman[249299]: 2025-12-09 16:28:29.058202864 +0000 UTC m=+0.191069515 container died d2fe6f0de745075fdd3e5bf4db1636a231b6bc38f258aaab039dd4dddc43dab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 09 16:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1eb657f69c16782b9cf65f5a75c4c3c1a6d1cc76db62f61b21abf7e7ea5ebe69-merged.mount: Deactivated successfully.
Dec 09 16:28:29 compute-0 podman[249299]: 2025-12-09 16:28:29.108213856 +0000 UTC m=+0.241080537 container remove d2fe6f0de745075fdd3e5bf4db1636a231b6bc38f258aaab039dd4dddc43dab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 09 16:28:29 compute-0 systemd[1]: libpod-conmon-d2fe6f0de745075fdd3e5bf4db1636a231b6bc38f258aaab039dd4dddc43dab0.scope: Deactivated successfully.
Dec 09 16:28:29 compute-0 podman[249338]: 2025-12-09 16:28:29.356160467 +0000 UTC m=+0.078277551 container create 2a6a3da045e5b26d550503d25d6e9577fafbec6e7963e5622a19542fb61943b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:28:29 compute-0 systemd[1]: Started libpod-conmon-2a6a3da045e5b26d550503d25d6e9577fafbec6e7963e5622a19542fb61943b5.scope.
Dec 09 16:28:29 compute-0 podman[249338]: 2025-12-09 16:28:29.323102954 +0000 UTC m=+0.045220118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:28:29 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ea1bc1dc2661ef0ea0659eafe9f723b774b732e3d39dc622d9f738be53f8d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ea1bc1dc2661ef0ea0659eafe9f723b774b732e3d39dc622d9f738be53f8d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ea1bc1dc2661ef0ea0659eafe9f723b774b732e3d39dc622d9f738be53f8d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ea1bc1dc2661ef0ea0659eafe9f723b774b732e3d39dc622d9f738be53f8d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:28:29 compute-0 podman[249338]: 2025-12-09 16:28:29.451245872 +0000 UTC m=+0.173363036 container init 2a6a3da045e5b26d550503d25d6e9577fafbec6e7963e5622a19542fb61943b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cannon, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:28:29 compute-0 podman[249338]: 2025-12-09 16:28:29.462117759 +0000 UTC m=+0.184234863 container start 2a6a3da045e5b26d550503d25d6e9577fafbec6e7963e5622a19542fb61943b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:28:29 compute-0 podman[249338]: 2025-12-09 16:28:29.46641055 +0000 UTC m=+0.188527674 container attach 2a6a3da045e5b26d550503d25d6e9577fafbec6e7963e5622a19542fb61943b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 16:28:29 compute-0 ceph-mon[75222]: pgmap v872: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:30 compute-0 lvm[249433]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:28:30 compute-0 lvm[249433]: VG ceph_vg0 finished
Dec 09 16:28:30 compute-0 lvm[249434]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:28:30 compute-0 lvm[249434]: VG ceph_vg1 finished
Dec 09 16:28:30 compute-0 lvm[249436]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:28:30 compute-0 lvm[249436]: VG ceph_vg2 finished
Dec 09 16:28:30 compute-0 wizardly_cannon[249355]: {}
Dec 09 16:28:30 compute-0 systemd[1]: libpod-2a6a3da045e5b26d550503d25d6e9577fafbec6e7963e5622a19542fb61943b5.scope: Deactivated successfully.
Dec 09 16:28:30 compute-0 podman[249338]: 2025-12-09 16:28:30.296195858 +0000 UTC m=+1.018312922 container died 2a6a3da045e5b26d550503d25d6e9577fafbec6e7963e5622a19542fb61943b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:28:30 compute-0 systemd[1]: libpod-2a6a3da045e5b26d550503d25d6e9577fafbec6e7963e5622a19542fb61943b5.scope: Consumed 1.301s CPU time.
Dec 09 16:28:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0ea1bc1dc2661ef0ea0659eafe9f723b774b732e3d39dc622d9f738be53f8d1-merged.mount: Deactivated successfully.
Dec 09 16:28:30 compute-0 podman[249338]: 2025-12-09 16:28:30.343092482 +0000 UTC m=+1.065209546 container remove 2a6a3da045e5b26d550503d25d6e9577fafbec6e7963e5622a19542fb61943b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:28:30 compute-0 systemd[1]: libpod-conmon-2a6a3da045e5b26d550503d25d6e9577fafbec6e7963e5622a19542fb61943b5.scope: Deactivated successfully.
Dec 09 16:28:30 compute-0 sudo[249262]: pam_unix(sudo:session): session closed for user root
Dec 09 16:28:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:28:30 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:28:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:28:30 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:28:30 compute-0 sudo[249452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:28:30 compute-0 sudo[249452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:28:30 compute-0 sudo[249452]: pam_unix(sudo:session): session closed for user root
Dec 09 16:28:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:28:31 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:28:31 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:28:31 compute-0 ceph-mon[75222]: pgmap v873: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:31 compute-0 sshd-session[249477]: Invalid user test from 146.190.31.45 port 52458
Dec 09 16:28:31 compute-0 podman[249480]: 2025-12-09 16:28:31.891446859 +0000 UTC m=+0.061900319 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:28:31 compute-0 sshd-session[249477]: Connection closed by invalid user test 146.190.31.45 port 52458 [preauth]
Dec 09 16:28:31 compute-0 podman[249479]: 2025-12-09 16:28:31.936661315 +0000 UTC m=+0.103139543 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 09 16:28:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:33 compute-0 ceph-mon[75222]: pgmap v874: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:35 compute-0 ceph-mon[75222]: pgmap v875: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.571327) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297715571415, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "total_data_size": 3470122, "memory_usage": 3529984, "flush_reason": "Manual Compaction"}
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297715591820, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3393343, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16351, "largest_seqno": 18402, "table_properties": {"data_size": 3384079, "index_size": 5822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18463, "raw_average_key_size": 19, "raw_value_size": 3365608, "raw_average_value_size": 3615, "num_data_blocks": 264, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765297489, "oldest_key_time": 1765297489, "file_creation_time": 1765297715, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 20540 microseconds, and 8932 cpu microseconds.
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.591882) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3393343 bytes OK
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.591907) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.593200) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.593219) EVENT_LOG_v1 {"time_micros": 1765297715593212, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.593247) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3461531, prev total WAL file size 3461531, number of live WAL files 2.
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.594548) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3313KB)], [38(7679KB)]
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297715594597, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11256680, "oldest_snapshot_seqno": -1}
Dec 09 16:28:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4434 keys, 9469794 bytes, temperature: kUnknown
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297715669319, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9469794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9436468, "index_size": 21120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 107206, "raw_average_key_size": 24, "raw_value_size": 9352758, "raw_average_value_size": 2109, "num_data_blocks": 895, "num_entries": 4434, "num_filter_entries": 4434, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765297715, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.669923) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9469794 bytes
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.671652) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.4 rd, 126.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4948, records dropped: 514 output_compression: NoCompression
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.671675) EVENT_LOG_v1 {"time_micros": 1765297715671663, "job": 18, "event": "compaction_finished", "compaction_time_micros": 74826, "compaction_time_cpu_micros": 23719, "output_level": 6, "num_output_files": 1, "total_output_size": 9469794, "num_input_records": 4948, "num_output_records": 4434, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297715672603, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297715674401, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.594471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.674593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.674601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.674603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.674604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:28:35 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:35.674606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:28:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:28:37 compute-0 ceph-mon[75222]: pgmap v876: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:39 compute-0 ceph-mon[75222]: pgmap v877: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:28:41 compute-0 ceph-mon[75222]: pgmap v878: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:43 compute-0 ceph-mon[75222]: pgmap v879: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:45 compute-0 ceph-mon[75222]: pgmap v880: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:28:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:46 compute-0 podman[249526]: 2025-12-09 16:28:46.612470073 +0000 UTC m=+0.056772194 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251202)
Dec 09 16:28:47 compute-0 ceph-mon[75222]: pgmap v881: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:49 compute-0 ceph-mon[75222]: pgmap v882: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:28:51 compute-0 ceph-mon[75222]: pgmap v883: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.078 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.079 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.079 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.079 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.079 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:28:53 compute-0 ceph-mon[75222]: pgmap v884: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:28:53 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376606867' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.639 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
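
The resource audit above gathers Ceph capacity by shelling out to the CLI rather than binding librados. A minimal sketch of the same probe, assuming the client.openstack keyring is readable and that the JSON key names below (taken from recent Ceph releases) match this cluster:

    import json
    import subprocess

    # The exact command nova-compute logs above via oslo_concurrency.processutils.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout
    df = json.loads(out)

    # Cluster-wide totals live under "stats"; the key names are an assumption
    # based on current Ceph releases -- verify against this cluster's output.
    stats = df["stats"]
    gib = 1024 ** 3
    print(f"avail: {stats['total_avail_bytes'] / gib:.1f} GiB "
          f"of {stats['total_bytes'] / gib:.1f} GiB")
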
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.787 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.788 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5136MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.789 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.789 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.858 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.858 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:28:53 compute-0 nova_compute[243452]: 2025-12-09 16:28:53.888 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:28:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:28:54 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269268313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:28:54 compute-0 nova_compute[243452]: 2025-12-09 16:28:54.464 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:28:54 compute-0 nova_compute[243452]: 2025-12-09 16:28:54.469 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:28:54 compute-0 nova_compute[243452]: 2025-12-09 16:28:54.490 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
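
The inventory dictionary above is enough to reproduce what the placement service will actually offer the scheduler. A worked example of the (total - reserved) * allocation_ratio arithmetic, using the exact numbers from this report:

    # Values copied from the set_inventory_for_provider entry above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        # Placement's effective capacity per resource class.
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g} schedulable")
    # -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 53.1
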
Dec 09 16:28:54 compute-0 nova_compute[243452]: 2025-12-09 16:28:54.491 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:28:54 compute-0 nova_compute[243452]: 2025-12-09 16:28:54.491 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:28:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:54 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2376606867' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:28:54 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3269268313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:28:55 compute-0 nova_compute[243452]: 2025-12-09 16:28:55.491 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:28:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:28:55 compute-0 ceph-mon[75222]: pgmap v885: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.619462) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297735619502, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 403, "num_deletes": 250, "total_data_size": 296529, "memory_usage": 304832, "flush_reason": "Manual Compaction"}
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297735623131, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 240572, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18403, "largest_seqno": 18805, "table_properties": {"data_size": 238232, "index_size": 440, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5965, "raw_average_key_size": 19, "raw_value_size": 233648, "raw_average_value_size": 758, "num_data_blocks": 20, "num_entries": 308, "num_filter_entries": 308, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765297716, "oldest_key_time": 1765297716, "file_creation_time": 1765297735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3727 microseconds, and 1291 cpu microseconds.
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.623188) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 240572 bytes OK
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.623210) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.624478) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.624490) EVENT_LOG_v1 {"time_micros": 1765297735624486, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.624510) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 293986, prev total WAL file size 293986, number of live WAL files 2.
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.624884) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(234KB)], [41(9247KB)]
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297735624918, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9710366, "oldest_snapshot_seqno": -1}
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4239 keys, 6432339 bytes, temperature: kUnknown
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297735682229, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6432339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6404769, "index_size": 15877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 103562, "raw_average_key_size": 24, "raw_value_size": 6328801, "raw_average_value_size": 1492, "num_data_blocks": 667, "num_entries": 4239, "num_filter_entries": 4239, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765297735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.682533) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6432339 bytes
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.683835) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.1 rd, 112.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.0 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(67.1) write-amplify(26.7) OK, records in: 4742, records dropped: 503 output_compression: NoCompression
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.683854) EVENT_LOG_v1 {"time_micros": 1765297735683845, "job": 20, "event": "compaction_finished", "compaction_time_micros": 57424, "compaction_time_cpu_micros": 18183, "output_level": 6, "num_output_files": 1, "total_output_size": 6432339, "num_input_records": 4742, "num_output_records": 4239, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
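
The amplification and throughput figures RocksDB prints for JOB 20 can be re-derived from the EVENT_LOG_v1 byte counts above; the definitions below are inferred from the fact that they reproduce the logged numbers exactly:

    # Byte counts from the compaction_started / table_file_creation events.
    l0_input    = 240_572     # file 43, the freshly flushed L0 table
    total_input = 9_710_366   # input_data_size: file 43 plus L6 file 41
    output      = 6_432_339   # file 44, the rewritten L6 table
    micros      = 57_424      # compaction_time_micros

    print(f"write-amplify      {output / l0_input:.1f}")                  # 26.7
    print(f"read-write-amplify {(total_input + output) / l0_input:.1f}")  # 67.1
    print(f"rd {total_input / micros:.1f} MB/s, "                         # 169.1
          f"wr {output / micros:.1f} MB/s")                               # 112.0
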
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297735684069, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297735686093, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.624824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.686129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.686134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.686136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.686138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:28:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:28:55.686139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:28:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:28:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:28:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:28:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:28:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:28:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:28:57 compute-0 ceph-mon[75222]: pgmap v886: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:58 compute-0 nova_compute[243452]: 2025-12-09 16:28:58.048 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:28:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:28:59 compute-0 nova_compute[243452]: 2025-12-09 16:28:59.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:28:59 compute-0 nova_compute[243452]: 2025-12-09 16:28:59.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:28:59 compute-0 nova_compute[243452]: 2025-12-09 16:28:59.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:28:59 compute-0 nova_compute[243452]: 2025-12-09 16:28:59.079 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:28:59 compute-0 nova_compute[243452]: 2025-12-09 16:28:59.080 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:28:59 compute-0 nova_compute[243452]: 2025-12-09 16:28:59.080 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:28:59 compute-0 nova_compute[243452]: 2025-12-09 16:28:59.080 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
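
Every "Running periodic task ComputeManager.*" entry in this stretch is dispatched by oslo.service's periodic task loop, and the reclaim skip above is just a guard on a config option. A minimal, self-contained sketch of that pattern (the option registration and spacing here are illustrative, not nova's actual defaults):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    # Illustrative registration; nova defines this option itself.
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _reclaim_queued_deletes(self, context):
            # Same guard as the log line above: a non-positive interval
            # disables the task body entirely.
            if CONF.reclaim_instance_interval <= 0:
                print("CONF.reclaim_instance_interval <= 0, skipping...")
                return

    Manager().run_periodic_tasks(context=None)  # one dispatch pass
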
Dec 09 16:28:59 compute-0 ceph-mon[75222]: pgmap v887: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:00 compute-0 nova_compute[243452]: 2025-12-09 16:29:00.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:29:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:29:01 compute-0 nova_compute[243452]: 2025-12-09 16:29:01.056 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:29:01 compute-0 ceph-mon[75222]: pgmap v888: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:02 compute-0 nova_compute[243452]: 2025-12-09 16:29:02.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:29:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:02 compute-0 podman[249591]: 2025-12-09 16:29:02.609029833 +0000 UTC m=+0.053119641 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Dec 09 16:29:02 compute-0 podman[249590]: 2025-12-09 16:29:02.694513907 +0000 UTC m=+0.133552672 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:29:03 compute-0 ceph-mon[75222]: pgmap v889: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:29:05 compute-0 ceph-mon[75222]: pgmap v890: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:07 compute-0 ceph-mon[75222]: pgmap v891: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:09 compute-0 ceph-mon[75222]: pgmap v892: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:29:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4045702769' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:29:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:29:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4045702769' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:29:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:29:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/4045702769' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:29:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/4045702769' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:29:11 compute-0 ceph-mon[75222]: pgmap v893: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:13 compute-0 ceph-mon[75222]: pgmap v894: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:29:15 compute-0 ceph-mon[75222]: pgmap v895: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:16 compute-0 sshd-session[249636]: Invalid user test from 146.190.31.45 port 35792
Dec 09 16:29:16 compute-0 sshd-session[249636]: Connection closed by invalid user test 146.190.31.45 port 35792 [preauth]
Dec 09 16:29:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:17 compute-0 podman[249639]: 2025-12-09 16:29:17.606690051 +0000 UTC m=+0.057632190 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd)
Dec 09 16:29:17 compute-0 ceph-mon[75222]: pgmap v896: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:29:17.847 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:29:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:29:17.848 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:29:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:29:17.848 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:29:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:19 compute-0 ceph-mon[75222]: pgmap v897: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:29:21 compute-0 ceph-mon[75222]: pgmap v898: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:23 compute-0 ceph-mon[75222]: pgmap v899: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:29:25 compute-0 ceph-mon[75222]: pgmap v900: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:29:25
Dec 09 16:29:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:29:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:29:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.data', 'vms', 'backups', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes']
Dec 09 16:29:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:29:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:29:27 compute-0 ceph-mon[75222]: pgmap v901: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:29 compute-0 ceph-mon[75222]: pgmap v902: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:30 compute-0 sudo[249658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:29:30 compute-0 sudo[249658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:29:30 compute-0 sudo[249658]: pam_unix(sudo:session): session closed for user root
Dec 09 16:29:30 compute-0 sudo[249683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:29:30 compute-0 sudo[249683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:29:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:29:31 compute-0 sudo[249683]: pam_unix(sudo:session): session closed for user root
Dec 09 16:29:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:29:31 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:29:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:29:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:29:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:29:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:29:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:29:31 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:29:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:29:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:29:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:29:31 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:29:31 compute-0 sudo[249738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:29:31 compute-0 sudo[249738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:29:31 compute-0 sudo[249738]: pam_unix(sudo:session): session closed for user root
Dec 09 16:29:31 compute-0 sudo[249763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:29:31 compute-0 sudo[249763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:29:31 compute-0 ceph-mon[75222]: pgmap v903: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:31 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:29:31 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:29:31 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:29:31 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:29:31 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:29:31 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:29:31 compute-0 podman[249801]: 2025-12-09 16:29:31.78315818 +0000 UTC m=+0.045789373 container create 61a030f20a8b41d858b601f337abadfd233c475ed6925927146cef95c658a8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 09 16:29:31 compute-0 systemd[1]: Started libpod-conmon-61a030f20a8b41d858b601f337abadfd233c475ed6925927146cef95c658a8c9.scope.
Dec 09 16:29:31 compute-0 podman[249801]: 2025-12-09 16:29:31.762481002 +0000 UTC m=+0.025112175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:29:31 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:29:31 compute-0 podman[249801]: 2025-12-09 16:29:31.902284458 +0000 UTC m=+0.164915651 container init 61a030f20a8b41d858b601f337abadfd233c475ed6925927146cef95c658a8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:29:31 compute-0 podman[249801]: 2025-12-09 16:29:31.91115922 +0000 UTC m=+0.173790373 container start 61a030f20a8b41d858b601f337abadfd233c475ed6925927146cef95c658a8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:29:31 compute-0 podman[249801]: 2025-12-09 16:29:31.914372412 +0000 UTC m=+0.177003575 container attach 61a030f20a8b41d858b601f337abadfd233c475ed6925927146cef95c658a8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:29:31 compute-0 sweet_mclean[249818]: 167 167
Dec 09 16:29:31 compute-0 systemd[1]: libpod-61a030f20a8b41d858b601f337abadfd233c475ed6925927146cef95c658a8c9.scope: Deactivated successfully.
Dec 09 16:29:31 compute-0 podman[249801]: 2025-12-09 16:29:31.917762228 +0000 UTC m=+0.180393381 container died 61a030f20a8b41d858b601f337abadfd233c475ed6925927146cef95c658a8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:29:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-270baaa497a29ce43cab54f6e75e567732b32c740f41731eb81d651ac42e1f47-merged.mount: Deactivated successfully.
Dec 09 16:29:31 compute-0 podman[249801]: 2025-12-09 16:29:31.955609994 +0000 UTC m=+0.218241147 container remove 61a030f20a8b41d858b601f337abadfd233c475ed6925927146cef95c658a8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:29:31 compute-0 systemd[1]: libpod-conmon-61a030f20a8b41d858b601f337abadfd233c475ed6925927146cef95c658a8c9.scope: Deactivated successfully.
Dec 09 16:29:32 compute-0 podman[249842]: 2025-12-09 16:29:32.14638479 +0000 UTC m=+0.057856096 container create 2085b0e3f3ced14da9317ab20a010379154b8ffde7c2ddd70c7e87c842752cc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_faraday, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:29:32 compute-0 systemd[1]: Started libpod-conmon-2085b0e3f3ced14da9317ab20a010379154b8ffde7c2ddd70c7e87c842752cc5.scope.
Dec 09 16:29:32 compute-0 podman[249842]: 2025-12-09 16:29:32.119558807 +0000 UTC m=+0.031030133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:29:32 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:29:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26efc35f443207282a62faca7b0c44cdc50a9c057eba5a165552eda8aa163a22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26efc35f443207282a62faca7b0c44cdc50a9c057eba5a165552eda8aa163a22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26efc35f443207282a62faca7b0c44cdc50a9c057eba5a165552eda8aa163a22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26efc35f443207282a62faca7b0c44cdc50a9c057eba5a165552eda8aa163a22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26efc35f443207282a62faca7b0c44cdc50a9c057eba5a165552eda8aa163a22/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:32 compute-0 podman[249842]: 2025-12-09 16:29:32.241990699 +0000 UTC m=+0.153462075 container init 2085b0e3f3ced14da9317ab20a010379154b8ffde7c2ddd70c7e87c842752cc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:29:32 compute-0 podman[249842]: 2025-12-09 16:29:32.254652489 +0000 UTC m=+0.166123825 container start 2085b0e3f3ced14da9317ab20a010379154b8ffde7c2ddd70c7e87c842752cc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_faraday, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:29:32 compute-0 podman[249842]: 2025-12-09 16:29:32.259152767 +0000 UTC m=+0.170624103 container attach 2085b0e3f3ced14da9317ab20a010379154b8ffde7c2ddd70c7e87c842752cc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_faraday, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:29:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:32 compute-0 laughing_faraday[249859]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:29:32 compute-0 laughing_faraday[249859]: --> All data devices are unavailable
Dec 09 16:29:32 compute-0 systemd[1]: libpod-2085b0e3f3ced14da9317ab20a010379154b8ffde7c2ddd70c7e87c842752cc5.scope: Deactivated successfully.
Dec 09 16:29:32 compute-0 conmon[249859]: conmon 2085b0e3f3ced14da931 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2085b0e3f3ced14da9317ab20a010379154b8ffde7c2ddd70c7e87c842752cc5.scope/container/memory.events
Dec 09 16:29:32 compute-0 podman[249842]: 2025-12-09 16:29:32.80007431 +0000 UTC m=+0.711545596 container died 2085b0e3f3ced14da9317ab20a010379154b8ffde7c2ddd70c7e87c842752cc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 16:29:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-26efc35f443207282a62faca7b0c44cdc50a9c057eba5a165552eda8aa163a22-merged.mount: Deactivated successfully.
Dec 09 16:29:32 compute-0 podman[249842]: 2025-12-09 16:29:32.846309275 +0000 UTC m=+0.757780601 container remove 2085b0e3f3ced14da9317ab20a010379154b8ffde7c2ddd70c7e87c842752cc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:29:32 compute-0 systemd[1]: libpod-conmon-2085b0e3f3ced14da9317ab20a010379154b8ffde7c2ddd70c7e87c842752cc5.scope: Deactivated successfully.
Dec 09 16:29:32 compute-0 sudo[249763]: pam_unix(sudo:session): session closed for user root
Dec 09 16:29:32 compute-0 podman[249887]: 2025-12-09 16:29:32.961439998 +0000 UTC m=+0.116095181 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:29:32 compute-0 podman[249880]: 2025-12-09 16:29:32.966473311 +0000 UTC m=+0.121060703 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 09 16:29:32 compute-0 sudo[249919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:29:32 compute-0 sudo[249919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:29:32 compute-0 sudo[249919]: pam_unix(sudo:session): session closed for user root
Dec 09 16:29:33 compute-0 sudo[249958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:29:33 compute-0 sudo[249958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:29:33 compute-0 podman[249995]: 2025-12-09 16:29:33.364965004 +0000 UTC m=+0.066422190 container create a76d02ce03d60ce5cad900bea56fb67c59e5bac97acdbbbcc38a722078bc6a92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:29:33 compute-0 systemd[1]: Started libpod-conmon-a76d02ce03d60ce5cad900bea56fb67c59e5bac97acdbbbcc38a722078bc6a92.scope.
Dec 09 16:29:33 compute-0 podman[249995]: 2025-12-09 16:29:33.338863162 +0000 UTC m=+0.040320428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:29:33 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:29:33 compute-0 podman[249995]: 2025-12-09 16:29:33.452319818 +0000 UTC m=+0.153777014 container init a76d02ce03d60ce5cad900bea56fb67c59e5bac97acdbbbcc38a722078bc6a92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_leavitt, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:29:33 compute-0 podman[249995]: 2025-12-09 16:29:33.461689925 +0000 UTC m=+0.163147101 container start a76d02ce03d60ce5cad900bea56fb67c59e5bac97acdbbbcc38a722078bc6a92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_leavitt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:29:33 compute-0 podman[249995]: 2025-12-09 16:29:33.465011719 +0000 UTC m=+0.166468895 container attach a76d02ce03d60ce5cad900bea56fb67c59e5bac97acdbbbcc38a722078bc6a92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:29:33 compute-0 admiring_leavitt[250012]: 167 167
Dec 09 16:29:33 compute-0 systemd[1]: libpod-a76d02ce03d60ce5cad900bea56fb67c59e5bac97acdbbbcc38a722078bc6a92.scope: Deactivated successfully.
Dec 09 16:29:33 compute-0 conmon[250012]: conmon a76d02ce03d60ce5cad9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a76d02ce03d60ce5cad900bea56fb67c59e5bac97acdbbbcc38a722078bc6a92.scope/container/memory.events
Dec 09 16:29:33 compute-0 podman[250017]: 2025-12-09 16:29:33.518362496 +0000 UTC m=+0.029938362 container died a76d02ce03d60ce5cad900bea56fb67c59e5bac97acdbbbcc38a722078bc6a92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:29:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-285b02d5b1bafa1cf339eb7411b4bdbc3962dca18788a44cac9f353c1a9390de-merged.mount: Deactivated successfully.
Dec 09 16:29:33 compute-0 podman[250017]: 2025-12-09 16:29:33.554640468 +0000 UTC m=+0.066216314 container remove a76d02ce03d60ce5cad900bea56fb67c59e5bac97acdbbbcc38a722078bc6a92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:29:33 compute-0 systemd[1]: libpod-conmon-a76d02ce03d60ce5cad900bea56fb67c59e5bac97acdbbbcc38a722078bc6a92.scope: Deactivated successfully.
Dec 09 16:29:33 compute-0 podman[250039]: 2025-12-09 16:29:33.759832293 +0000 UTC m=+0.039996468 container create 60368e04f6bbd5d457432ec54321a8b8572fea90eac773e432ad719bc036f3f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hertz, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:29:33 compute-0 ceph-mon[75222]: pgmap v904: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:33 compute-0 systemd[1]: Started libpod-conmon-60368e04f6bbd5d457432ec54321a8b8572fea90eac773e432ad719bc036f3f3.scope.
Dec 09 16:29:33 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39aeca4889ffb65127188919ce9a311ec214dca176d80cf078cf4cbde823aa14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39aeca4889ffb65127188919ce9a311ec214dca176d80cf078cf4cbde823aa14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39aeca4889ffb65127188919ce9a311ec214dca176d80cf078cf4cbde823aa14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39aeca4889ffb65127188919ce9a311ec214dca176d80cf078cf4cbde823aa14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:33 compute-0 podman[250039]: 2025-12-09 16:29:33.742665435 +0000 UTC m=+0.022829630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:29:33 compute-0 podman[250039]: 2025-12-09 16:29:33.841488666 +0000 UTC m=+0.121652881 container init 60368e04f6bbd5d457432ec54321a8b8572fea90eac773e432ad719bc036f3f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hertz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 09 16:29:33 compute-0 podman[250039]: 2025-12-09 16:29:33.853550329 +0000 UTC m=+0.133714504 container start 60368e04f6bbd5d457432ec54321a8b8572fea90eac773e432ad719bc036f3f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hertz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:29:33 compute-0 podman[250039]: 2025-12-09 16:29:33.857214053 +0000 UTC m=+0.137378308 container attach 60368e04f6bbd5d457432ec54321a8b8572fea90eac773e432ad719bc036f3f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]: {
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:     "0": [
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:         {
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "devices": [
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "/dev/loop3"
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             ],
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_name": "ceph_lv0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_size": "21470642176",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "name": "ceph_lv0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "tags": {
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.cluster_name": "ceph",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.crush_device_class": "",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.encrypted": "0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.objectstore": "bluestore",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.osd_id": "0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.type": "block",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.vdo": "0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.with_tpm": "0"
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             },
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "type": "block",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "vg_name": "ceph_vg0"
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:         }
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:     ],
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:     "1": [
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:         {
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "devices": [
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "/dev/loop4"
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             ],
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_name": "ceph_lv1",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_size": "21470642176",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "name": "ceph_lv1",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "tags": {
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.cluster_name": "ceph",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.crush_device_class": "",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.encrypted": "0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.objectstore": "bluestore",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.osd_id": "1",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.type": "block",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.vdo": "0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.with_tpm": "0"
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             },
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "type": "block",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "vg_name": "ceph_vg1"
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:         }
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:     ],
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:     "2": [
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:         {
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "devices": [
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "/dev/loop5"
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             ],
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_name": "ceph_lv2",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_size": "21470642176",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "name": "ceph_lv2",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "tags": {
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.cluster_name": "ceph",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.crush_device_class": "",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.encrypted": "0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.objectstore": "bluestore",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.osd_id": "2",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.type": "block",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.vdo": "0",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:                 "ceph.with_tpm": "0"
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             },
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "type": "block",
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:             "vg_name": "ceph_vg2"
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:         }
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]:     ]
Dec 09 16:29:34 compute-0 vigilant_hertz[250056]: }
Dec 09 16:29:34 compute-0 systemd[1]: libpod-60368e04f6bbd5d457432ec54321a8b8572fea90eac773e432ad719bc036f3f3.scope: Deactivated successfully.
Dec 09 16:29:34 compute-0 podman[250039]: 2025-12-09 16:29:34.212816666 +0000 UTC m=+0.492980881 container died 60368e04f6bbd5d457432ec54321a8b8572fea90eac773e432ad719bc036f3f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 09 16:29:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-39aeca4889ffb65127188919ce9a311ec214dca176d80cf078cf4cbde823aa14-merged.mount: Deactivated successfully.
Dec 09 16:29:34 compute-0 podman[250039]: 2025-12-09 16:29:34.264124135 +0000 UTC m=+0.544288320 container remove 60368e04f6bbd5d457432ec54321a8b8572fea90eac773e432ad719bc036f3f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hertz, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:29:34 compute-0 systemd[1]: libpod-conmon-60368e04f6bbd5d457432ec54321a8b8572fea90eac773e432ad719bc036f3f3.scope: Deactivated successfully.
Dec 09 16:29:34 compute-0 sudo[249958]: pam_unix(sudo:session): session closed for user root
Dec 09 16:29:34 compute-0 sudo[250077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:29:34 compute-0 sudo[250077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:29:34 compute-0 sudo[250077]: pam_unix(sudo:session): session closed for user root
Dec 09 16:29:34 compute-0 sudo[250102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:29:34 compute-0 sudo[250102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:29:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:34 compute-0 podman[250140]: 2025-12-09 16:29:34.759022839 +0000 UTC m=+0.040942465 container create e309ea719f7551c58ea3aeb8d8ac9a92964347da6acbdfb9f66b2e21e21aa57b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cori, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 09 16:29:34 compute-0 systemd[1]: Started libpod-conmon-e309ea719f7551c58ea3aeb8d8ac9a92964347da6acbdfb9f66b2e21e21aa57b.scope.
Dec 09 16:29:34 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:29:34 compute-0 podman[250140]: 2025-12-09 16:29:34.740528183 +0000 UTC m=+0.022447809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:29:34 compute-0 podman[250140]: 2025-12-09 16:29:34.841006031 +0000 UTC m=+0.122925657 container init e309ea719f7551c58ea3aeb8d8ac9a92964347da6acbdfb9f66b2e21e21aa57b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:29:34 compute-0 podman[250140]: 2025-12-09 16:29:34.846129317 +0000 UTC m=+0.128048953 container start e309ea719f7551c58ea3aeb8d8ac9a92964347da6acbdfb9f66b2e21e21aa57b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:29:34 compute-0 cool_cori[250156]: 167 167
Dec 09 16:29:34 compute-0 podman[250140]: 2025-12-09 16:29:34.849840602 +0000 UTC m=+0.131760198 container attach e309ea719f7551c58ea3aeb8d8ac9a92964347da6acbdfb9f66b2e21e21aa57b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cori, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:29:34 compute-0 systemd[1]: libpod-e309ea719f7551c58ea3aeb8d8ac9a92964347da6acbdfb9f66b2e21e21aa57b.scope: Deactivated successfully.
Dec 09 16:29:34 compute-0 podman[250140]: 2025-12-09 16:29:34.851781517 +0000 UTC m=+0.133701113 container died e309ea719f7551c58ea3aeb8d8ac9a92964347da6acbdfb9f66b2e21e21aa57b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:29:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6225c80715044b2a9d3aa0bf51aec844f2b8dda95f8dadeb6b332521eac1ca1f-merged.mount: Deactivated successfully.
Dec 09 16:29:34 compute-0 podman[250140]: 2025-12-09 16:29:34.889815009 +0000 UTC m=+0.171734605 container remove e309ea719f7551c58ea3aeb8d8ac9a92964347da6acbdfb9f66b2e21e21aa57b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:29:34 compute-0 systemd[1]: libpod-conmon-e309ea719f7551c58ea3aeb8d8ac9a92964347da6acbdfb9f66b2e21e21aa57b.scope: Deactivated successfully.
Dec 09 16:29:35 compute-0 podman[250179]: 2025-12-09 16:29:35.103487186 +0000 UTC m=+0.054189463 container create 4aebfe96c2aef37ed0fbae80a7ca0f07b36ff56b2ad45ff0ec918eaba1cb178c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jang, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:29:35 compute-0 systemd[1]: Started libpod-conmon-4aebfe96c2aef37ed0fbae80a7ca0f07b36ff56b2ad45ff0ec918eaba1cb178c.scope.
Dec 09 16:29:35 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:29:35 compute-0 podman[250179]: 2025-12-09 16:29:35.074628825 +0000 UTC m=+0.025331142 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cfed8f3fab82f5aac5b254fd3bb8e6ff345885484a2232e90cc616e7e6d432/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cfed8f3fab82f5aac5b254fd3bb8e6ff345885484a2232e90cc616e7e6d432/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cfed8f3fab82f5aac5b254fd3bb8e6ff345885484a2232e90cc616e7e6d432/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cfed8f3fab82f5aac5b254fd3bb8e6ff345885484a2232e90cc616e7e6d432/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:29:35 compute-0 podman[250179]: 2025-12-09 16:29:35.186296101 +0000 UTC m=+0.136998348 container init 4aebfe96c2aef37ed0fbae80a7ca0f07b36ff56b2ad45ff0ec918eaba1cb178c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jang, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:29:35 compute-0 podman[250179]: 2025-12-09 16:29:35.19682465 +0000 UTC m=+0.147526897 container start 4aebfe96c2aef37ed0fbae80a7ca0f07b36ff56b2ad45ff0ec918eaba1cb178c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jang, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:29:35 compute-0 podman[250179]: 2025-12-09 16:29:35.201217905 +0000 UTC m=+0.151920172 container attach 4aebfe96c2aef37ed0fbae80a7ca0f07b36ff56b2ad45ff0ec918eaba1cb178c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jang, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:29:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:29:35 compute-0 ceph-mon[75222]: pgmap v905: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:35 compute-0 lvm[250276]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:29:35 compute-0 lvm[250276]: VG ceph_vg1 finished
Dec 09 16:29:35 compute-0 lvm[250275]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:29:35 compute-0 lvm[250275]: VG ceph_vg0 finished
Dec 09 16:29:35 compute-0 lvm[250278]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:29:35 compute-0 lvm[250278]: VG ceph_vg2 finished
Dec 09 16:29:35 compute-0 vigorous_jang[250197]: {}
Dec 09 16:29:36 compute-0 systemd[1]: libpod-4aebfe96c2aef37ed0fbae80a7ca0f07b36ff56b2ad45ff0ec918eaba1cb178c.scope: Deactivated successfully.
Dec 09 16:29:36 compute-0 podman[250179]: 2025-12-09 16:29:36.023959763 +0000 UTC m=+0.974662010 container died 4aebfe96c2aef37ed0fbae80a7ca0f07b36ff56b2ad45ff0ec918eaba1cb178c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:29:36 compute-0 systemd[1]: libpod-4aebfe96c2aef37ed0fbae80a7ca0f07b36ff56b2ad45ff0ec918eaba1cb178c.scope: Consumed 1.370s CPU time.
Dec 09 16:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-28cfed8f3fab82f5aac5b254fd3bb8e6ff345885484a2232e90cc616e7e6d432-merged.mount: Deactivated successfully.
Dec 09 16:29:36 compute-0 podman[250179]: 2025-12-09 16:29:36.069655332 +0000 UTC m=+1.020357569 container remove 4aebfe96c2aef37ed0fbae80a7ca0f07b36ff56b2ad45ff0ec918eaba1cb178c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jang, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 16:29:36 compute-0 systemd[1]: libpod-conmon-4aebfe96c2aef37ed0fbae80a7ca0f07b36ff56b2ad45ff0ec918eaba1cb178c.scope: Deactivated successfully.
Dec 09 16:29:36 compute-0 sudo[250102]: pam_unix(sudo:session): session closed for user root
Dec 09 16:29:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:29:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:29:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:29:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:29:36 compute-0 sudo[250292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:29:36 compute-0 sudo[250292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:29:36 compute-0 sudo[250292]: pam_unix(sudo:session): session closed for user root
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.0333656678172135e-06 of space, bias 4.0, pg target 0.002440038801380656 quantized to 16 (current 16)
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:29:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:29:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:29:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:29:38 compute-0 ceph-mon[75222]: pgmap v906: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:40 compute-0 ceph-mon[75222]: pgmap v907: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:29:42 compute-0 ceph-mon[75222]: pgmap v908: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:44 compute-0 ceph-mon[75222]: pgmap v909: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:29:45 compute-0 ceph-mon[75222]: pgmap v910: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:47 compute-0 ceph-mon[75222]: pgmap v911: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:48 compute-0 podman[250317]: 2025-12-09 16:29:48.62953721 +0000 UTC m=+0.068874580 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
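
This container health_status event is podman's healthcheck timer firing the configured test ('/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/multipathd). The same probe can be triggered by hand; a short sketch, assuming podman is on PATH and using the container name from the event:

    import subprocess

    # Run the container's configured healthcheck once; exit status 0
    # corresponds to the health_status=healthy events in this journal.
    rc = subprocess.call(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
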
Dec 09 16:29:49 compute-0 ceph-mon[75222]: pgmap v912: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:29:51 compute-0 ceph-mon[75222]: pgmap v913: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:53 compute-0 ceph-mon[75222]: pgmap v914: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.097 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.097 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.097 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.097 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.098 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:29:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:29:55 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613334510' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.646 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
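
The resource tracker's disk probe is an ordinary subprocess: `ceph df --format=json` run as client.openstack, which is what produces the matching handle_command/audit lines on the monitor. A sketch of the same probe, assuming the conf and keyring paths from the log are readable:

    import json
    import subprocess

    # Same command the audit above shows being dispatched by nova.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(f'{stats["total_avail_bytes"] / 2**30:.1f} GiB free of '
          f'{stats["total_bytes"] / 2**30:.1f} GiB')
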
Dec 09 16:29:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.819 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.820 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5126MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.820 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.820 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:29:55 compute-0 ceph-mon[75222]: pgmap v915: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:55 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3613334510' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.890 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.890 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:29:55 compute-0 nova_compute[243452]: 2025-12-09 16:29:55.918 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:29:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:29:56 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2043230043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:29:56 compute-0 nova_compute[243452]: 2025-12-09 16:29:56.475 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:29:56 compute-0 nova_compute[243452]: 2025-12-09 16:29:56.482 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:29:56 compute-0 nova_compute[243452]: 2025-12-09 16:29:56.501 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:29:56 compute-0 nova_compute[243452]: 2025-12-09 16:29:56.503 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:29:56 compute-0 nova_compute[243452]: 2025-12-09 16:29:56.504 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
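
The inventory reported to Placement at 16:29:56 fixes the schedulable capacity of this node: for each resource class, Placement admits allocations up to (total - reserved) * allocation_ratio. A quick worked check of the numbers in that record:

    # Schedulable capacity implied by the inventory above:
    # usable = (total - reserved) * allocation_ratio
    vcpu = (8 - 0) * 4.0      # 32 schedulable VCPUs
    ram = (7679 - 512) * 1.0  # 7167 MB schedulable RAM
    disk = (59 - 0) * 0.9     # 53.1 GB schedulable disk
    print(vcpu, ram, disk)
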
Dec 09 16:29:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:29:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:29:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:29:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:29:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:29:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:29:56 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2043230043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:29:57 compute-0 ceph-mon[75222]: pgmap v916: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:29:58 compute-0 sshd-session[250382]: Invalid user test from 146.190.31.45 port 52864
Dec 09 16:29:58 compute-0 sshd-session[250382]: Connection closed by invalid user test 146.190.31.45 port 52864 [preauth]
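
These two sshd-session lines are a failed pre-auth probe for a nonexistent user from 146.190.31.45; the connection closed before authentication. A small illustrative sketch for tallying such probes from an exported journal (the regex matches the exact "Invalid user ... from ... port ..." format above; the file name is hypothetical):

    import re
    from collections import Counter

    INVALID = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

    def count_probes(lines):
        """Count failed pre-auth probes per (user, source IP) pair."""
        hits = Counter()
        for line in lines:
            m = INVALID.search(line)
            if m:
                hits[m.groups()] += 1
        return hits

    # e.g.: count_probes(open("journal-export.txt"))
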
Dec 09 16:29:59 compute-0 nova_compute[243452]: 2025-12-09 16:29:59.497 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:29:59 compute-0 nova_compute[243452]: 2025-12-09 16:29:59.498 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:29:59 compute-0 nova_compute[243452]: 2025-12-09 16:29:59.498 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:29:59 compute-0 nova_compute[243452]: 2025-12-09 16:29:59.499 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:29:59 compute-0 nova_compute[243452]: 2025-12-09 16:29:59.714 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:29:59 compute-0 nova_compute[243452]: 2025-12-09 16:29:59.714 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:29:59 compute-0 nova_compute[243452]: 2025-12-09 16:29:59.714 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
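
The "Running periodic task ..." lines are emitted by oslo.service each time run_periodic_tasks() invokes a registered method; _reclaim_queued_deletes then returns immediately because the interval option is non-positive. A toy sketch of the same registration pattern (a stand-in manager, not nova's ComputeManager):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    RECLAIM_INTERVAL = 0  # stand-in for nova's CONF.reclaim_instance_interval

    class ToyManager(periodic_task.PeriodicTasks):
        # Each decorated method is logged as "Running periodic task ..."
        # when run_periodic_tasks() fires it.
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            if RECLAIM_INTERVAL <= 0:
                # Same guard as "CONF.reclaim_instance_interval <= 0,
                # skipping..." in the log above.
                return

    # ToyManager(CONF).run_periodic_tasks(context=None)
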
Dec 09 16:29:59 compute-0 ceph-mon[75222]: pgmap v917: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:00 compute-0 nova_compute[243452]: 2025-12-09 16:30:00.263 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:30:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:30:01 compute-0 nova_compute[243452]: 2025-12-09 16:30:01.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:30:01 compute-0 nova_compute[243452]: 2025-12-09 16:30:01.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:30:01 compute-0 nova_compute[243452]: 2025-12-09 16:30:01.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:30:01 compute-0 ceph-mon[75222]: pgmap v918: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:03 compute-0 podman[250385]: 2025-12-09 16:30:03.613844952 +0000 UTC m=+0.051927617 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 16:30:03 compute-0 podman[250384]: 2025-12-09 16:30:03.686435017 +0000 UTC m=+0.125197972 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 09 16:30:03 compute-0 ceph-mon[75222]: pgmap v919: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:04 compute-0 nova_compute[243452]: 2025-12-09 16:30:04.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:30:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:30:05 compute-0 ceph-mon[75222]: pgmap v920: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec 09 16:30:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec 09 16:30:06 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec 09 16:30:07 compute-0 ceph-mon[75222]: pgmap v921: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:07 compute-0 ceph-mon[75222]: osdmap e131: 3 total, 3 up, 3 in
Dec 09 16:30:07 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec 09 16:30:07 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec 09 16:30:07 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec 09 16:30:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec 09 16:30:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec 09 16:30:08 compute-0 ceph-mon[75222]: osdmap e132: 3 total, 3 up, 3 in
Dec 09 16:30:08 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec 09 16:30:09 compute-0 ceph-mon[75222]: pgmap v924: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:09 compute-0 ceph-mon[75222]: osdmap e133: 3 total, 3 up, 3 in
Dec 09 16:30:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:30:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1456640298' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:30:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:30:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1456640298' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:30:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 21 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.4 MiB/s wr, 30 op/s
Dec 09 16:30:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 16:30:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec 09 16:30:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/1456640298' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:30:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/1456640298' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
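
Here a second client at 192.168.122.10 (cinder, judging by the 'volumes' pool) queries cluster usage and the pool quota via monitor commands. The librados Python binding can issue the same JSON commands; a sketch assuming the client.openstack keyring is available locally:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota",
                     "pool": "volumes", "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, json.loads(out or b"{}"))
    finally:
        cluster.shutdown()
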
Dec 09 16:30:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec 09 16:30:10 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec 09 16:30:11 compute-0 ceph-mon[75222]: pgmap v926: 305 pgs: 305 active+clean; 21 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.4 MiB/s wr, 30 op/s
Dec 09 16:30:11 compute-0 ceph-mon[75222]: osdmap e134: 3 total, 3 up, 3 in
Dec 09 16:30:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 21 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.7 MiB/s wr, 32 op/s
Dec 09 16:30:14 compute-0 ceph-mon[75222]: pgmap v928: 305 pgs: 305 active+clean; 21 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.7 MiB/s wr, 32 op/s
Dec 09 16:30:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 37 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 5.5 MiB/s wr, 57 op/s
Dec 09 16:30:15 compute-0 ceph-mon[75222]: pgmap v929: 305 pgs: 305 active+clean; 37 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 5.5 MiB/s wr, 57 op/s
Dec 09 16:30:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:30:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec 09 16:30:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec 09 16:30:15 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec 09 16:30:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.3 MiB/s wr, 43 op/s
Dec 09 16:30:16 compute-0 ceph-mon[75222]: osdmap e135: 3 total, 3 up, 3 in
Dec 09 16:30:17 compute-0 ceph-mon[75222]: pgmap v931: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.3 MiB/s wr, 43 op/s
Dec 09 16:30:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:30:17.849 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:30:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:30:17.849 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:30:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:30:17.849 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
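
The acquire/release pairs around _check_child_processes (and around compute_resources earlier) are printed by oslo.concurrency itself whenever a decorated callable enters and leaves its named lock. The pattern in miniature:

    from oslo_concurrency import lockutils

    # Entering and leaving this function produces the same
    # "Acquiring lock ... / Lock ... acquired / ... released" DEBUG trio.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        ...  # inspect monitored children while holding the lock
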
Dec 09 16:30:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Dec 09 16:30:19 compute-0 podman[250429]: 2025-12-09 16:30:19.683830859 +0000 UTC m=+0.085793091 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:30:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec 09 16:30:19 compute-0 ceph-mon[75222]: pgmap v932: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Dec 09 16:30:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec 09 16:30:19 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec 09 16:30:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 MiB/s wr, 26 op/s
Dec 09 16:30:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:30:20 compute-0 ceph-mon[75222]: osdmap e136: 3 total, 3 up, 3 in
Dec 09 16:30:21 compute-0 ceph-mon[75222]: pgmap v934: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 MiB/s wr, 26 op/s
Dec 09 16:30:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 568 KiB/s wr, 1 op/s
Dec 09 16:30:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec 09 16:30:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec 09 16:30:22 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec 09 16:30:23 compute-0 ceph-mon[75222]: pgmap v935: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 568 KiB/s wr, 1 op/s
Dec 09 16:30:23 compute-0 ceph-mon[75222]: osdmap e137: 3 total, 3 up, 3 in
Dec 09 16:30:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 4.9 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Dec 09 16:30:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:30:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec 09 16:30:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec 09 16:30:25 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.738970) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297825739026, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1048, "num_deletes": 258, "total_data_size": 1463518, "memory_usage": 1495424, "flush_reason": "Manual Compaction"}
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297825753407, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1449054, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18806, "largest_seqno": 19853, "table_properties": {"data_size": 1443858, "index_size": 2656, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10636, "raw_average_key_size": 18, "raw_value_size": 1433430, "raw_average_value_size": 2541, "num_data_blocks": 120, "num_entries": 564, "num_filter_entries": 564, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765297736, "oldest_key_time": 1765297736, "file_creation_time": 1765297825, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 14489 microseconds, and 8213 cpu microseconds.
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.753458) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1449054 bytes OK
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.753480) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.754913) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.754930) EVENT_LOG_v1 {"time_micros": 1765297825754925, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.754951) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1458564, prev total WAL file size 1458564, number of live WAL files 2.
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.755754) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1415KB)], [44(6281KB)]
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297825755822, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7881393, "oldest_snapshot_seqno": -1}
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4272 keys, 7758545 bytes, temperature: kUnknown
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297825804053, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7758545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7728682, "index_size": 18105, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 105379, "raw_average_key_size": 24, "raw_value_size": 7650054, "raw_average_value_size": 1790, "num_data_blocks": 760, "num_entries": 4272, "num_filter_entries": 4272, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765297825, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:30:25 compute-0 ceph-mon[75222]: pgmap v937: 305 pgs: 305 active+clean; 4.9 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Dec 09 16:30:25 compute-0 ceph-mon[75222]: osdmap e138: 3 total, 3 up, 3 in
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.804450) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7758545 bytes
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.805610) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.1 rd, 160.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 6.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(10.8) write-amplify(5.4) OK, records in: 4803, records dropped: 531 output_compression: NoCompression
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.805636) EVENT_LOG_v1 {"time_micros": 1765297825805624, "job": 22, "event": "compaction_finished", "compaction_time_micros": 48334, "compaction_time_cpu_micros": 26776, "output_level": 6, "num_output_files": 1, "total_output_size": 7758545, "num_input_records": 4803, "num_output_records": 4272, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297825806172, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297825807931, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.755644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.808002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.808008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.808009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.808011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:30:25 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:30:25.808013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
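
The compaction summary's write-amplify(5.4) and read-write-amplify(10.8) follow directly from the byte counts logged for JOB 21/22, assuming the usual definitions (bytes written over bytes flushed, and bytes read plus written over bytes flushed). A quick check:

    # Byte counts from the JOB 21/22 event lines above.
    l0_input = 1_449_054     # table #46, the flushed L0 file
    l6_input = 6_281 * 1024  # table #44, "44(6281KB)" (rounded in the log)
    output = 7_758_545       # table #47, the compacted L6 file

    write_amp = output / l0_input                       # ~5.35
    rw_amp = (l0_input + l6_input + output) / l0_input  # ~10.79
    print(round(write_amp, 1), round(rw_amp, 1))        # 5.4 10.8
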
Dec 09 16:30:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:30:25
Dec 09 16:30:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:30:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:30:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'images', '.rgw.root', '.mgr']
Dec 09 16:30:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
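
The balancer woke in upmap mode with the default 5% misplaced ceiling, evaluated every pool, and prepared 0 of a possible 10 upmap changes, meaning the PGs are already evenly mapped. The current mode and state can be read back with the standard CLI; a sketch (output fields can vary by release, hence the defensive .get):

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status.get("mode"), status.get("active"))
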
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.1 KiB/s wr, 74 op/s
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:30:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:30:27 compute-0 ceph-mon[75222]: pgmap v939: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.1 KiB/s wr, 74 op/s
Dec 09 16:30:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Dec 09 16:30:29 compute-0 ceph-mon[75222]: pgmap v940: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Dec 09 16:30:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Dec 09 16:30:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:30:31 compute-0 ceph-mon[75222]: pgmap v941: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Dec 09 16:30:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.6 KiB/s wr, 50 op/s
Dec 09 16:30:33 compute-0 ceph-mon[75222]: pgmap v942: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.6 KiB/s wr, 50 op/s
Dec 09 16:30:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 307 B/s wr, 1 op/s
Dec 09 16:30:34 compute-0 podman[250451]: 2025-12-09 16:30:34.611646077 +0000 UTC m=+0.049809278 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:30:34 compute-0 podman[250450]: 2025-12-09 16:30:34.654484415 +0000 UTC m=+0.094994403 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 09 16:30:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:30:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec 09 16:30:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec 09 16:30:35 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec 09 16:30:36 compute-0 ceph-mon[75222]: pgmap v943: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 307 B/s wr, 1 op/s
Dec 09 16:30:36 compute-0 ceph-mon[75222]: osdmap e139: 3 total, 3 up, 3 in
Dec 09 16:30:36 compute-0 sudo[250497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:30:36 compute-0 sudo[250497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:30:36 compute-0 sudo[250497]: pam_unix(sudo:session): session closed for user root
Dec 09 16:30:36 compute-0 sudo[250522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:30:36 compute-0 sudo[250522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.7398652627364557e-07 of space, bias 1.0, pg target 8.219595788209367e-05 quantized to 32 (current 32)
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1139873830789452e-06 of space, bias 4.0, pg target 0.0025367848596947345 quantized to 16 (current 16)
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:30:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:30:36 compute-0 sudo[250522]: pam_unix(sudo:session): session closed for user root
Dec 09 16:30:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:30:36 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:30:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:30:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:30:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:30:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:30:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:30:36 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:30:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:30:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:30:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:30:36 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:30:37 compute-0 sudo[250578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:30:37 compute-0 sudo[250578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:30:37 compute-0 sudo[250578]: pam_unix(sudo:session): session closed for user root
Dec 09 16:30:37 compute-0 sudo[250603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:30:37 compute-0 sudo[250603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:30:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:30:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:30:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:30:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:30:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:30:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:30:37 compute-0 podman[250640]: 2025-12-09 16:30:37.540981823 +0000 UTC m=+0.114556399 container create 53d28d31a3c76346e775a45d02428aeeb211804827c9634751014ab94be9ce67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lumiere, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:30:37 compute-0 podman[250640]: 2025-12-09 16:30:37.461647957 +0000 UTC m=+0.035222563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:30:37 compute-0 systemd[1]: Started libpod-conmon-53d28d31a3c76346e775a45d02428aeeb211804827c9634751014ab94be9ce67.scope.
Dec 09 16:30:37 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:30:37 compute-0 podman[250640]: 2025-12-09 16:30:37.757579333 +0000 UTC m=+0.331153939 container init 53d28d31a3c76346e775a45d02428aeeb211804827c9634751014ab94be9ce67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lumiere, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:30:37 compute-0 podman[250640]: 2025-12-09 16:30:37.765338183 +0000 UTC m=+0.338912739 container start 53d28d31a3c76346e775a45d02428aeeb211804827c9634751014ab94be9ce67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lumiere, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:30:37 compute-0 focused_lumiere[250657]: 167 167
Dec 09 16:30:37 compute-0 systemd[1]: libpod-53d28d31a3c76346e775a45d02428aeeb211804827c9634751014ab94be9ce67.scope: Deactivated successfully.
Dec 09 16:30:37 compute-0 conmon[250657]: conmon 53d28d31a3c76346e775 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53d28d31a3c76346e775a45d02428aeeb211804827c9634751014ab94be9ce67.scope/container/memory.events
Dec 09 16:30:37 compute-0 podman[250640]: 2025-12-09 16:30:37.78631106 +0000 UTC m=+0.359885706 container attach 53d28d31a3c76346e775a45d02428aeeb211804827c9634751014ab94be9ce67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:30:37 compute-0 podman[250640]: 2025-12-09 16:30:37.786812644 +0000 UTC m=+0.360387230 container died 53d28d31a3c76346e775a45d02428aeeb211804827c9634751014ab94be9ce67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 09 16:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b975944f48a5e315e0d8b143145b06df2586341099521d583a95ae51bf92b14-merged.mount: Deactivated successfully.
Dec 09 16:30:37 compute-0 podman[250640]: 2025-12-09 16:30:37.828008166 +0000 UTC m=+0.401582722 container remove 53d28d31a3c76346e775a45d02428aeeb211804827c9634751014ab94be9ce67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lumiere, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:30:37 compute-0 systemd[1]: libpod-conmon-53d28d31a3c76346e775a45d02428aeeb211804827c9634751014ab94be9ce67.scope: Deactivated successfully.
Dec 09 16:30:37 compute-0 podman[250679]: 2025-12-09 16:30:37.973439832 +0000 UTC m=+0.039448463 container create 9c1add2aaeebce5acc1cecc0d644fccd65606684f1c556543043dca09e1338e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chandrasekhar, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:30:38 compute-0 systemd[1]: Started libpod-conmon-9c1add2aaeebce5acc1cecc0d644fccd65606684f1c556543043dca09e1338e2.scope.
Dec 09 16:30:38 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34a739fdc62cd16d285f85139d31eafd8bbd8670c4518cc54e688519c37f3f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34a739fdc62cd16d285f85139d31eafd8bbd8670c4518cc54e688519c37f3f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34a739fdc62cd16d285f85139d31eafd8bbd8670c4518cc54e688519c37f3f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34a739fdc62cd16d285f85139d31eafd8bbd8670c4518cc54e688519c37f3f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34a739fdc62cd16d285f85139d31eafd8bbd8670c4518cc54e688519c37f3f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:38 compute-0 podman[250679]: 2025-12-09 16:30:37.956782328 +0000 UTC m=+0.022790989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:30:38 compute-0 podman[250679]: 2025-12-09 16:30:38.058337256 +0000 UTC m=+0.124345897 container init 9c1add2aaeebce5acc1cecc0d644fccd65606684f1c556543043dca09e1338e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:30:38 compute-0 podman[250679]: 2025-12-09 16:30:38.068239718 +0000 UTC m=+0.134248349 container start 9c1add2aaeebce5acc1cecc0d644fccd65606684f1c556543043dca09e1338e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:30:38 compute-0 podman[250679]: 2025-12-09 16:30:38.072050996 +0000 UTC m=+0.138059627 container attach 9c1add2aaeebce5acc1cecc0d644fccd65606684f1c556543043dca09e1338e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:30:38 compute-0 ceph-mon[75222]: pgmap v945: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:38 compute-0 lucid_chandrasekhar[250696]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:30:38 compute-0 lucid_chandrasekhar[250696]: --> All data devices are unavailable
Dec 09 16:30:38 compute-0 systemd[1]: libpod-9c1add2aaeebce5acc1cecc0d644fccd65606684f1c556543043dca09e1338e2.scope: Deactivated successfully.
Dec 09 16:30:38 compute-0 podman[250679]: 2025-12-09 16:30:38.588062321 +0000 UTC m=+0.654070962 container died 9c1add2aaeebce5acc1cecc0d644fccd65606684f1c556543043dca09e1338e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:30:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c34a739fdc62cd16d285f85139d31eafd8bbd8670c4518cc54e688519c37f3f9-merged.mount: Deactivated successfully.
Dec 09 16:30:38 compute-0 podman[250679]: 2025-12-09 16:30:38.637343452 +0000 UTC m=+0.703352093 container remove 9c1add2aaeebce5acc1cecc0d644fccd65606684f1c556543043dca09e1338e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:30:38 compute-0 systemd[1]: libpod-conmon-9c1add2aaeebce5acc1cecc0d644fccd65606684f1c556543043dca09e1338e2.scope: Deactivated successfully.
Dec 09 16:30:38 compute-0 sudo[250603]: pam_unix(sudo:session): session closed for user root
Dec 09 16:30:38 compute-0 sudo[250730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:30:38 compute-0 sudo[250730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:30:38 compute-0 sudo[250730]: pam_unix(sudo:session): session closed for user root
Dec 09 16:30:38 compute-0 sudo[250755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:30:38 compute-0 sudo[250755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:30:38 compute-0 sshd-session[250713]: Invalid user test from 146.190.31.45 port 53808
Dec 09 16:30:38 compute-0 sshd-session[250713]: Connection closed by invalid user test 146.190.31.45 port 53808 [preauth]
Dec 09 16:30:39 compute-0 podman[250794]: 2025-12-09 16:30:39.119573356 +0000 UTC m=+0.051220427 container create 803d7e2e0e2d46854086a8dfb468117d8ce50be325501361773fccd1504fd819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:30:39 compute-0 systemd[1]: Started libpod-conmon-803d7e2e0e2d46854086a8dfb468117d8ce50be325501361773fccd1504fd819.scope.
Dec 09 16:30:39 compute-0 podman[250794]: 2025-12-09 16:30:39.099187747 +0000 UTC m=+0.030834868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:30:39 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:30:39 compute-0 podman[250794]: 2025-12-09 16:30:39.211237483 +0000 UTC m=+0.142884604 container init 803d7e2e0e2d46854086a8dfb468117d8ce50be325501361773fccd1504fd819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:30:39 compute-0 podman[250794]: 2025-12-09 16:30:39.217923903 +0000 UTC m=+0.149570984 container start 803d7e2e0e2d46854086a8dfb468117d8ce50be325501361773fccd1504fd819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:30:39 compute-0 podman[250794]: 2025-12-09 16:30:39.221535906 +0000 UTC m=+0.153182997 container attach 803d7e2e0e2d46854086a8dfb468117d8ce50be325501361773fccd1504fd819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 09 16:30:39 compute-0 cool_elbakyan[250811]: 167 167
Dec 09 16:30:39 compute-0 systemd[1]: libpod-803d7e2e0e2d46854086a8dfb468117d8ce50be325501361773fccd1504fd819.scope: Deactivated successfully.
Dec 09 16:30:39 compute-0 podman[250794]: 2025-12-09 16:30:39.225021115 +0000 UTC m=+0.156668186 container died 803d7e2e0e2d46854086a8dfb468117d8ce50be325501361773fccd1504fd819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:30:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-976423ac8d2d974ddcfe13dec02ba78c3ba2f0cc199bb675e64d6d30cc11b9d5-merged.mount: Deactivated successfully.
Dec 09 16:30:39 compute-0 podman[250794]: 2025-12-09 16:30:39.268676517 +0000 UTC m=+0.200323608 container remove 803d7e2e0e2d46854086a8dfb468117d8ce50be325501361773fccd1504fd819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:30:39 compute-0 systemd[1]: libpod-conmon-803d7e2e0e2d46854086a8dfb468117d8ce50be325501361773fccd1504fd819.scope: Deactivated successfully.
Dec 09 16:30:39 compute-0 podman[250834]: 2025-12-09 16:30:39.463458936 +0000 UTC m=+0.057246369 container create 5e31df9a6bd875c97c58e5f30fa25e8a39a60a4d6796d25293f98a6221d5c4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curie, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 16:30:39 compute-0 systemd[1]: Started libpod-conmon-5e31df9a6bd875c97c58e5f30fa25e8a39a60a4d6796d25293f98a6221d5c4ef.scope.
Dec 09 16:30:39 compute-0 podman[250834]: 2025-12-09 16:30:39.43370766 +0000 UTC m=+0.027495143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:30:39 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b5cfb4bc7a4bc76fc42b96ae5b7581546ef20a189eafc3de253a4849592c2df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b5cfb4bc7a4bc76fc42b96ae5b7581546ef20a189eafc3de253a4849592c2df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b5cfb4bc7a4bc76fc42b96ae5b7581546ef20a189eafc3de253a4849592c2df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b5cfb4bc7a4bc76fc42b96ae5b7581546ef20a189eafc3de253a4849592c2df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:39 compute-0 podman[250834]: 2025-12-09 16:30:39.570803899 +0000 UTC m=+0.164591372 container init 5e31df9a6bd875c97c58e5f30fa25e8a39a60a4d6796d25293f98a6221d5c4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 09 16:30:39 compute-0 podman[250834]: 2025-12-09 16:30:39.57822336 +0000 UTC m=+0.172010803 container start 5e31df9a6bd875c97c58e5f30fa25e8a39a60a4d6796d25293f98a6221d5c4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 09 16:30:39 compute-0 podman[250834]: 2025-12-09 16:30:39.58140153 +0000 UTC m=+0.175188963 container attach 5e31df9a6bd875c97c58e5f30fa25e8a39a60a4d6796d25293f98a6221d5c4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:30:39 compute-0 mystifying_curie[250850]: {
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:     "0": [
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:         {
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "devices": [
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "/dev/loop3"
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             ],
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_name": "ceph_lv0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_size": "21470642176",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "name": "ceph_lv0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "tags": {
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.cluster_name": "ceph",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.crush_device_class": "",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.encrypted": "0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.objectstore": "bluestore",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.osd_id": "0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.type": "block",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.vdo": "0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.with_tpm": "0"
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             },
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "type": "block",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "vg_name": "ceph_vg0"
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:         }
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:     ],
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:     "1": [
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:         {
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "devices": [
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "/dev/loop4"
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             ],
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_name": "ceph_lv1",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_size": "21470642176",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "name": "ceph_lv1",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "tags": {
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.cluster_name": "ceph",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.crush_device_class": "",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.encrypted": "0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.objectstore": "bluestore",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.osd_id": "1",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.type": "block",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.vdo": "0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.with_tpm": "0"
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             },
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "type": "block",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "vg_name": "ceph_vg1"
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:         }
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:     ],
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:     "2": [
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:         {
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "devices": [
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "/dev/loop5"
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             ],
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_name": "ceph_lv2",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_size": "21470642176",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "name": "ceph_lv2",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "tags": {
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.cluster_name": "ceph",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.crush_device_class": "",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.encrypted": "0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.objectstore": "bluestore",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.osd_id": "2",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.type": "block",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.vdo": "0",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:                 "ceph.with_tpm": "0"
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             },
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "type": "block",
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:             "vg_name": "ceph_vg2"
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:         }
Dec 09 16:30:39 compute-0 mystifying_curie[250850]:     ]
Dec 09 16:30:39 compute-0 mystifying_curie[250850]: }
Dec 09 16:30:39 compute-0 systemd[1]: libpod-5e31df9a6bd875c97c58e5f30fa25e8a39a60a4d6796d25293f98a6221d5c4ef.scope: Deactivated successfully.
Dec 09 16:30:39 compute-0 podman[250834]: 2025-12-09 16:30:39.862973838 +0000 UTC m=+0.456761271 container died 5e31df9a6bd875c97c58e5f30fa25e8a39a60a4d6796d25293f98a6221d5c4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 09 16:30:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b5cfb4bc7a4bc76fc42b96ae5b7581546ef20a189eafc3de253a4849592c2df-merged.mount: Deactivated successfully.
Dec 09 16:30:40 compute-0 podman[250834]: 2025-12-09 16:30:40.132013629 +0000 UTC m=+0.725801062 container remove 5e31df9a6bd875c97c58e5f30fa25e8a39a60a4d6796d25293f98a6221d5c4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curie, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:30:40 compute-0 sudo[250755]: pam_unix(sudo:session): session closed for user root
Dec 09 16:30:40 compute-0 ceph-mon[75222]: pgmap v946: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:30:40 compute-0 systemd[1]: libpod-conmon-5e31df9a6bd875c97c58e5f30fa25e8a39a60a4d6796d25293f98a6221d5c4ef.scope: Deactivated successfully.
Dec 09 16:30:40 compute-0 sudo[250870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:30:40 compute-0 sudo[250870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:30:40 compute-0 sudo[250870]: pam_unix(sudo:session): session closed for user root
Dec 09 16:30:40 compute-0 sudo[250895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:30:40 compute-0 sudo[250895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:30:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:30:40 compute-0 podman[250931]: 2025-12-09 16:30:40.664859613 +0000 UTC m=+0.023385976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:30:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:30:40 compute-0 podman[250931]: 2025-12-09 16:30:40.840654012 +0000 UTC m=+0.199180395 container create 7afb5c9a10ae36eca1b28a6b15a49296bc82eba7c4a2b77cef4f5b1bcfd6ffa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:30:40 compute-0 systemd[1]: Started libpod-conmon-7afb5c9a10ae36eca1b28a6b15a49296bc82eba7c4a2b77cef4f5b1bcfd6ffa5.scope.
Dec 09 16:30:40 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:30:40 compute-0 podman[250931]: 2025-12-09 16:30:40.998999704 +0000 UTC m=+0.357526067 container init 7afb5c9a10ae36eca1b28a6b15a49296bc82eba7c4a2b77cef4f5b1bcfd6ffa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_vaughan, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:30:41 compute-0 podman[250931]: 2025-12-09 16:30:41.005119218 +0000 UTC m=+0.363645561 container start 7afb5c9a10ae36eca1b28a6b15a49296bc82eba7c4a2b77cef4f5b1bcfd6ffa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:30:41 compute-0 pensive_vaughan[250947]: 167 167
Dec 09 16:30:41 compute-0 systemd[1]: libpod-7afb5c9a10ae36eca1b28a6b15a49296bc82eba7c4a2b77cef4f5b1bcfd6ffa5.scope: Deactivated successfully.
Dec 09 16:30:41 compute-0 podman[250931]: 2025-12-09 16:30:41.179772675 +0000 UTC m=+0.538299048 container attach 7afb5c9a10ae36eca1b28a6b15a49296bc82eba7c4a2b77cef4f5b1bcfd6ffa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 16:30:41 compute-0 podman[250931]: 2025-12-09 16:30:41.180680651 +0000 UTC m=+0.539207024 container died 7afb5c9a10ae36eca1b28a6b15a49296bc82eba7c4a2b77cef4f5b1bcfd6ffa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_vaughan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:30:41 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 16:30:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b24fe675ec6a2e5e9342701efe8a5a90862543bfabadf88c841b76ca6f788dbf-merged.mount: Deactivated successfully.
Dec 09 16:30:41 compute-0 ceph-mon[75222]: pgmap v947: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:30:41 compute-0 podman[250931]: 2025-12-09 16:30:41.740114271 +0000 UTC m=+1.098640614 container remove 7afb5c9a10ae36eca1b28a6b15a49296bc82eba7c4a2b77cef4f5b1bcfd6ffa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_vaughan, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 09 16:30:41 compute-0 systemd[1]: libpod-conmon-7afb5c9a10ae36eca1b28a6b15a49296bc82eba7c4a2b77cef4f5b1bcfd6ffa5.scope: Deactivated successfully.
Dec 09 16:30:42 compute-0 podman[250972]: 2025-12-09 16:30:41.919211074 +0000 UTC m=+0.023208671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:30:42 compute-0 podman[250972]: 2025-12-09 16:30:42.048531122 +0000 UTC m=+0.152528629 container create dc29d46c79f71ccdf7c73ce83734f469220574ffde8d5a51fb17cd0f98e16a3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_thompson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:30:42 compute-0 systemd[1]: Started libpod-conmon-dc29d46c79f71ccdf7c73ce83734f469220574ffde8d5a51fb17cd0f98e16a3d.scope.
Dec 09 16:30:42 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2598db4b50db363a4cd12d64a3e23956892a0b6057fd36a9ac26d8eeef1a4038/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2598db4b50db363a4cd12d64a3e23956892a0b6057fd36a9ac26d8eeef1a4038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2598db4b50db363a4cd12d64a3e23956892a0b6057fd36a9ac26d8eeef1a4038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2598db4b50db363a4cd12d64a3e23956892a0b6057fd36a9ac26d8eeef1a4038/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:30:42 compute-0 podman[250972]: 2025-12-09 16:30:42.144697257 +0000 UTC m=+0.248694774 container init dc29d46c79f71ccdf7c73ce83734f469220574ffde8d5a51fb17cd0f98e16a3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:30:42 compute-0 podman[250972]: 2025-12-09 16:30:42.151796558 +0000 UTC m=+0.255794065 container start dc29d46c79f71ccdf7c73ce83734f469220574ffde8d5a51fb17cd0f98e16a3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:30:42 compute-0 podman[250972]: 2025-12-09 16:30:42.155122833 +0000 UTC m=+0.259120350 container attach dc29d46c79f71ccdf7c73ce83734f469220574ffde8d5a51fb17cd0f98e16a3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_thompson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:30:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:30:42 compute-0 lvm[251067]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:30:42 compute-0 lvm[251068]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:30:42 compute-0 lvm[251067]: VG ceph_vg0 finished
Dec 09 16:30:42 compute-0 lvm[251068]: VG ceph_vg1 finished
Dec 09 16:30:42 compute-0 lvm[251070]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:30:42 compute-0 lvm[251070]: VG ceph_vg2 finished
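The lvm[] messages above are LVM's event-driven autoactivation: pvscan reports each PV as it appears and declares the VG complete once every PV backing it is online (here one loop device per ceph VG). The same state can be confirmed after the fact through the lvm2 JSON report interface; a minimal sketch, assuming vgs is on PATH and emits the usual {"report": [{"vg": [...]}]} layout:

    import json
    import subprocess

    # One row per volume group; pv_count/lv_count arrive as strings.
    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,lv_count"],
        capture_output=True, text=True, check=True).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        print(vg["vg_name"], "PVs:", vg["pv_count"], "LVs:", vg["lv_count"])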
Dec 09 16:30:42 compute-0 funny_thompson[250989]: {}
Dec 09 16:30:42 compute-0 systemd[1]: libpod-dc29d46c79f71ccdf7c73ce83734f469220574ffde8d5a51fb17cd0f98e16a3d.scope: Deactivated successfully.
Dec 09 16:30:42 compute-0 systemd[1]: libpod-dc29d46c79f71ccdf7c73ce83734f469220574ffde8d5a51fb17cd0f98e16a3d.scope: Consumed 1.380s CPU time.
Dec 09 16:30:42 compute-0 podman[250972]: 2025-12-09 16:30:42.985689454 +0000 UTC m=+1.089686951 container died dc29d46c79f71ccdf7c73ce83734f469220574ffde8d5a51fb17cd0f98e16a3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_thompson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2598db4b50db363a4cd12d64a3e23956892a0b6057fd36a9ac26d8eeef1a4038-merged.mount: Deactivated successfully.
Dec 09 16:30:43 compute-0 podman[250972]: 2025-12-09 16:30:43.150558552 +0000 UTC m=+1.254556069 container remove dc29d46c79f71ccdf7c73ce83734f469220574ffde8d5a51fb17cd0f98e16a3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_thompson, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:30:43 compute-0 systemd[1]: libpod-conmon-dc29d46c79f71ccdf7c73ce83734f469220574ffde8d5a51fb17cd0f98e16a3d.scope: Deactivated successfully.
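The two containers above (pensive_vaughan, then funny_thompson) are cephadm's short-lived exec containers, each walking the full podman lifecycle: create, init, start, attach, died, remove, with systemd tearing down the matching libpod and conmon scopes afterwards. The event stream these journal lines reflect can also be followed live; a sketch, assuming a podman release whose events command accepts a Go-template --format (the label filter matches the ceph=True label in the log):

    import json
    import subprocess

    # Stream one JSON object per container event (create/init/start/
    # attach/died/remove), restricted to containers labelled ceph=True.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "label=ceph=True",
         "--format", "{{json .}}"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        # Field casing has varied across podman releases; check both.
        print(ev.get("Status") or ev.get("status"),
              ev.get("Name") or ev.get("name"))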
Dec 09 16:30:43 compute-0 sudo[250895]: pam_unix(sudo:session): session closed for user root
Dec 09 16:30:43 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:30:43 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:30:43 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:30:43 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:30:43 compute-0 sudo[251086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:30:43 compute-0 sudo[251086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:30:43 compute-0 sudo[251086]: pam_unix(sudo:session): session closed for user root
Dec 09 16:30:43 compute-0 ceph-mon[75222]: pgmap v948: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:30:43 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:30:43 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:30:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:30:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:30:45 compute-0 ceph-mon[75222]: pgmap v949: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:30:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Dec 09 16:30:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec 09 16:30:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec 09 16:30:46 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec 09 16:30:47 compute-0 ceph-mon[75222]: pgmap v950: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Dec 09 16:30:47 compute-0 ceph-mon[75222]: osdmap e140: 3 total, 3 up, 3 in
Dec 09 16:30:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:30:49 compute-0 ceph-mon[75222]: pgmap v952: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:30:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:30:50 compute-0 podman[251112]: 2025-12-09 16:30:50.64696783 +0000 UTC m=+0.082279620 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
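The health_status=healthy line above is podman running the healthcheck declared in config_data: the test command /openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/multipathd. The same check can be forced by hand; a sketch using the container_name from the log:

    import subprocess

    # `podman healthcheck run` executes the container's configured test
    # command; exit status 0 means healthy, non-zero means unhealthy.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "multipathd"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")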
Dec 09 16:30:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:30:51 compute-0 ceph-mon[75222]: pgmap v953: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:30:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:30:53 compute-0 ceph-mon[75222]: pgmap v954: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:30:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:30:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:30:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec 09 16:30:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec 09 16:30:55 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec 09 16:30:55 compute-0 ceph-mon[75222]: pgmap v955: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:30:55 compute-0 ceph-mon[75222]: osdmap e141: 3 total, 3 up, 3 in
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.086 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.087 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.087 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.087 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.088 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:30:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:30:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:30:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:30:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:30:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:30:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:30:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 25 op/s
Dec 09 16:30:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:30:56 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4086941958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.622 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.801 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.803 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5109MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.803 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.803 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.873 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.874 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:30:56 compute-0 nova_compute[243452]: 2025-12-09 16:30:56.909 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:30:56 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4086941958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:30:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:30:57 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2229162375' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:30:57 compute-0 nova_compute[243452]: 2025-12-09 16:30:57.489 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:30:57 compute-0 nova_compute[243452]: 2025-12-09 16:30:57.497 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:30:57 compute-0 nova_compute[243452]: 2025-12-09 16:30:57.520 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:30:57 compute-0 nova_compute[243452]: 2025-12-09 16:30:57.524 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:30:57 compute-0 nova_compute[243452]: 2025-12-09 16:30:57.524 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
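One full update_available_resource pass ends here: the tracker took the compute_resources lock, shelled out twice to ceph df (the RBD image backend sizes ephemeral storage from the cluster, hence free_disk=59.98... GB against the 60 GiB pool), rebuilt the hypervisor view, and found placement inventory unchanged. The disk-sizing step is easy to reproduce outside nova; a sketch, assuming the client.openstack keyring is readable and the stats field names emitted by current ceph releases:

    import json
    import subprocess

    # Same command the resource tracker logs above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"total {stats['total_bytes'] / gib:.2f} GiB, "
          f"avail {stats['total_avail_bytes'] / gib:.2f} GiB")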
Dec 09 16:30:57 compute-0 ceph-mon[75222]: pgmap v957: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 25 op/s
Dec 09 16:30:57 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2229162375' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:30:58 compute-0 nova_compute[243452]: 2025-12-09 16:30:58.525 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:30:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:30:59 compute-0 ceph-mon[75222]: pgmap v958: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:31:00 compute-0 nova_compute[243452]: 2025-12-09 16:31:00.047 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:31:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:31:01 compute-0 nova_compute[243452]: 2025-12-09 16:31:01.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:31:01 compute-0 nova_compute[243452]: 2025-12-09 16:31:01.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:31:01 compute-0 nova_compute[243452]: 2025-12-09 16:31:01.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:31:01 compute-0 nova_compute[243452]: 2025-12-09 16:31:01.162 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:31:01 compute-0 nova_compute[243452]: 2025-12-09 16:31:01.162 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:31:01 compute-0 nova_compute[243452]: 2025-12-09 16:31:01.163 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:31:01 compute-0 nova_compute[243452]: 2025-12-09 16:31:01.163 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:31:01 compute-0 nova_compute[243452]: 2025-12-09 16:31:01.163 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
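All of these ComputeManager entries come from oslo.service's periodic task machinery: each task is a decorated method, and run_periodic_tasks fires whichever are due, emitting the "Running periodic task ..." line first. A minimal standalone sketch of the same pattern (class and task names here are illustrative, not nova's):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_info_cache(self, context):
            # Runs at most once per `spacing` seconds.
            print("healing info cache")

    mgr = Manager()
    # Normally invoked repeatedly from a timer loop; one manual tick:
    mgr.run_periodic_tasks(context=None)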
Dec 09 16:31:01 compute-0 ceph-mon[75222]: pgmap v959: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:03 compute-0 nova_compute[243452]: 2025-12-09 16:31:03.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:31:03 compute-0 ceph-mon[75222]: pgmap v960: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:05 compute-0 nova_compute[243452]: 2025-12-09 16:31:05.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:31:05 compute-0 podman[251178]: 2025-12-09 16:31:05.624645296 +0000 UTC m=+0.063441275 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 09 16:31:05 compute-0 podman[251177]: 2025-12-09 16:31:05.659046215 +0000 UTC m=+0.102933419 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 09 16:31:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:31:05 compute-0 ceph-mon[75222]: pgmap v961: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:08 compute-0 ceph-mon[75222]: pgmap v962: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:10 compute-0 ceph-mon[75222]: pgmap v963: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:31:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3402228930' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:31:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:31:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3402228930' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:31:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:31:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3402228930' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:31:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3402228930' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
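This audited pair, df followed by osd pool get-quota on volumes from client.openstack at 192.168.122.10, looks like the periodic capacity poll of an RBD-backed cinder-volume service: overall pool usage first, then any quota capping it. Both are ordinary mon commands; a sketch:

    import json
    import subprocess

    def mon_cmd(*args):
        out = subprocess.run(
            ["ceph", "--id", "openstack", *args, "--format", "json"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = mon_cmd("df")
    quota = mon_cmd("osd", "pool", "get-quota", "volumes")
    # quota_max_bytes / quota_max_objects of 0 mean "unlimited".
    print(quota)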
Dec 09 16:31:12 compute-0 ceph-mon[75222]: pgmap v964: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:14 compute-0 ceph-mon[75222]: pgmap v965: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:31:16 compute-0 ceph-mon[75222]: pgmap v966: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:31:17.850 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:31:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:31:17.851 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:31:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:31:17.851 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
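The Acquiring/acquired/released triple around _check_child_processes is the standard trace emitted by oslo.concurrency's synchronized decorator, which neutron's ProcessMonitor uses to serialize its child-process sweep. A minimal sketch of the same construct:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # While one thread is in here, others block on the same named
        # lock; each entry/exit produces the Acquiring/acquired/released
        # DEBUG lines seen above.
        pass

    check_child_processes()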
Dec 09 16:31:18 compute-0 ceph-mon[75222]: pgmap v967: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:19 compute-0 sshd-session[251223]: Invalid user test from 146.190.31.45 port 39880
Dec 09 16:31:19 compute-0 sshd-session[251223]: Connection closed by invalid user test 146.190.31.45 port 39880 [preauth]
Dec 09 16:31:20 compute-0 ceph-mon[75222]: pgmap v968: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:31:21 compute-0 podman[251225]: 2025-12-09 16:31:21.639555447 +0000 UTC m=+0.084291211 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:31:22 compute-0 ceph-mon[75222]: pgmap v969: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:23 compute-0 ceph-mon[75222]: pgmap v970: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:25 compute-0 ceph-mon[75222]: pgmap v971: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:31:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:31:25
Dec 09 16:31:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:31:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:31:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.log', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.meta', '.mgr']
Dec 09 16:31:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
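"prepared 0/10 upmap changes" means the balancer ran in upmap mode over all eleven pools and found nothing worth moving: with 305 PGs already active+clean across three equal OSDs, the distribution is as even as upmap can make it. Its state is queryable; a sketch, assuming the status command honors --format json as other mgr commands do:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(status.get("active"), status.get("mode"))   # e.g. True upmap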
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:31:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
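Here the rbd_support module reloads per-pool schedule state for its two schedulers, trash purge and mirror snapshots, across the vms/volumes/backups/images pools (start_after= is empty, so no schedules are defined yet). The stored schedules can be listed with the rbd CLI; a sketch, assuming the --recursive flag on schedule ls available in recent releases:

    import subprocess

    for args in (["trash", "purge", "schedule", "ls", "--recursive"],
                 ["mirror", "snapshot", "schedule", "ls", "--recursive"]):
        out = subprocess.run(["rbd", *args], capture_output=True, text=True)
        print(" ".join(args), "->", out.stdout.strip() or "(none)")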
Dec 09 16:31:27 compute-0 ceph-mon[75222]: pgmap v972: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:29 compute-0 ceph-mon[75222]: pgmap v973: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:31:31 compute-0 ceph-mon[75222]: pgmap v974: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:33 compute-0 ceph-mon[75222]: pgmap v975: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:35 compute-0 ceph-mon[75222]: pgmap v976: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:36 compute-0 podman[251246]: 2025-12-09 16:31:36.614571263 +0000 UTC m=+0.058977961 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 09 16:31:36 compute-0 podman[251245]: 2025-12-09 16:31:36.640766657 +0000 UTC m=+0.088930499 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.5328861014749993e-07 of space, bias 1.0, pg target 0.00010598658304424998 quantized to 32 (current 32)
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1292019567274135e-06 of space, bias 4.0, pg target 0.002555042348072896 quantized to 16 (current 16)
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:31:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
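The autoscaler numbers above are internally consistent and worth decoding: pg target = capacity fraction x bias x (mon_target_pg_per_osd x OSD count), which with the default 100 PGs per OSD and the 3 OSDs here gives a multiplier of 300. The target is then quantized to a power of two, and the current pg_num is kept unless it is off by more than the autoscaler's default 3x threshold, which is why every pool stays where it is. A check of three of the logged rows:

    # pg_target = capacity_fraction * bias * (100 PGs/OSD * 3 OSDs)
    rows = [
        (".mgr",               7.185749983720779e-06, 1.0,
         0.0021557249951162337),
        ("images",             3.5328861014749993e-07, 1.0,
         0.00010598658304424998),
        ("cephfs.cephfs.meta", 2.1292019567274135e-06, 4.0,
         0.002555042348072896),
    ]
    for pool, frac, bias, logged in rows:
        target = frac * bias * 300
        assert abs(target - logged) < 1e-12, (pool, target, logged)
    print("all three logged pg targets reproduced")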
Dec 09 16:31:37 compute-0 ceph-mon[75222]: pgmap v977: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:39 compute-0 ceph-mon[75222]: pgmap v978: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:31:41 compute-0 ceph-mon[75222]: pgmap v979: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:43 compute-0 sudo[251291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:31:43 compute-0 sudo[251291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:31:43 compute-0 sudo[251291]: pam_unix(sudo:session): session closed for user root
Dec 09 16:31:43 compute-0 sudo[251316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:31:43 compute-0 sudo[251316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:31:43 compute-0 ceph-mon[75222]: pgmap v980: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:44 compute-0 sudo[251316]: pam_unix(sudo:session): session closed for user root
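The pair of sudo calls above is cephadm's usual two-step: locate python3, then run the copied cephadm binary with gather-facts, which prints a JSON document of host facts (hostname, kernel, memory, NICs) for the orchestrator to consume. A minimal sketch of invoking it directly, assuming a cephadm binary on PATH; the top-level key names are not asserted here, so the sketch just lists whatever this build emits:

    import json
    import subprocess

    # Runs the same subcommand as the sudo line above; cephadm prints one
    # JSON object of host facts on stdout.
    out = subprocess.run(["cephadm", "gather-facts"],
                         capture_output=True, text=True, check=True).stdout
    facts = json.loads(out)
    print(sorted(facts))          # inspect which fact names this build emits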
Dec 09 16:31:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:31:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:31:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:31:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:31:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:31:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:31:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:31:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:31:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:31:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:31:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:31:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:31:44 compute-0 sudo[251374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:31:44 compute-0 sudo[251374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:31:44 compute-0 sudo[251374]: pam_unix(sudo:session): session closed for user root
Dec 09 16:31:44 compute-0 sudo[251399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:31:44 compute-0 sudo[251399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:31:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:44 compute-0 podman[251437]: 2025-12-09 16:31:44.806116129 +0000 UTC m=+0.048116222 container create f25172d96947798ac17aac8e79c1d3b3ecbfffd95aa13d80a8e7ab4866a0e1a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 16:31:44 compute-0 systemd[1]: Started libpod-conmon-f25172d96947798ac17aac8e79c1d3b3ecbfffd95aa13d80a8e7ab4866a0e1a4.scope.
Dec 09 16:31:44 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:31:44 compute-0 podman[251437]: 2025-12-09 16:31:44.78625236 +0000 UTC m=+0.028252473 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:31:44 compute-0 podman[251437]: 2025-12-09 16:31:44.893868224 +0000 UTC m=+0.135868337 container init f25172d96947798ac17aac8e79c1d3b3ecbfffd95aa13d80a8e7ab4866a0e1a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:31:44 compute-0 podman[251437]: 2025-12-09 16:31:44.900661082 +0000 UTC m=+0.142661175 container start f25172d96947798ac17aac8e79c1d3b3ecbfffd95aa13d80a8e7ab4866a0e1a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:31:44 compute-0 podman[251437]: 2025-12-09 16:31:44.903572682 +0000 UTC m=+0.145572825 container attach f25172d96947798ac17aac8e79c1d3b3ecbfffd95aa13d80a8e7ab4866a0e1a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:31:44 compute-0 jovial_faraday[251453]: 167 167
Dec 09 16:31:44 compute-0 systemd[1]: libpod-f25172d96947798ac17aac8e79c1d3b3ecbfffd95aa13d80a8e7ab4866a0e1a4.scope: Deactivated successfully.
Dec 09 16:31:44 compute-0 podman[251437]: 2025-12-09 16:31:44.906430501 +0000 UTC m=+0.148430624 container died f25172d96947798ac17aac8e79c1d3b3ecbfffd95aa13d80a8e7ab4866a0e1a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:31:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ee5fa65a61f8a84a90f642153503478e7afb4b04f1c281a9746cab0f7fcee7c-merged.mount: Deactivated successfully.
Dec 09 16:31:44 compute-0 podman[251437]: 2025-12-09 16:31:44.944070421 +0000 UTC m=+0.186070514 container remove f25172d96947798ac17aac8e79c1d3b3ecbfffd95aa13d80a8e7ab4866a0e1a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:31:44 compute-0 systemd[1]: libpod-conmon-f25172d96947798ac17aac8e79c1d3b3ecbfffd95aa13d80a8e7ab4866a0e1a4.scope: Deactivated successfully.
Dec 09 16:31:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:31:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:31:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:31:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:31:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:31:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
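Each handle_command/audit pair in this stretch is the cephadm mgr module driving the monitor directly with a structured mon_command payload ("config generate-minimal-conf", "auth get", "osd tree") rather than shelling out to the ceph CLI. A minimal sketch of issuing the same payload from Python, assuming the python3-rados binding and the client.admin keyring that this host mounts into its containers:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same JSON dispatched in the audit lines above; returns a minimal
        # ceph.conf naming the fsid and the mon addresses.
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        print(outbuf.decode() if ret == 0 else f"error {ret}: {outs}")
    finally:
        cluster.shutdown()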
Dec 09 16:31:45 compute-0 podman[251476]: 2025-12-09 16:31:45.131713297 +0000 UTC m=+0.050867777 container create a500b0d102006a43f7422cd9986a44539f150c89d86917f8378c08227abecc5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_saha, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 09 16:31:45 compute-0 systemd[1]: Started libpod-conmon-a500b0d102006a43f7422cd9986a44539f150c89d86917f8378c08227abecc5c.scope.
Dec 09 16:31:45 compute-0 podman[251476]: 2025-12-09 16:31:45.107658352 +0000 UTC m=+0.026812842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:31:45 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd69c2e8f72fc6bd293b6b0486d65db84d12989df58f8e4b2efc2d7135b4694/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd69c2e8f72fc6bd293b6b0486d65db84d12989df58f8e4b2efc2d7135b4694/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd69c2e8f72fc6bd293b6b0486d65db84d12989df58f8e4b2efc2d7135b4694/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd69c2e8f72fc6bd293b6b0486d65db84d12989df58f8e4b2efc2d7135b4694/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd69c2e8f72fc6bd293b6b0486d65db84d12989df58f8e4b2efc2d7135b4694/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:45 compute-0 podman[251476]: 2025-12-09 16:31:45.218461864 +0000 UTC m=+0.137616354 container init a500b0d102006a43f7422cd9986a44539f150c89d86917f8378c08227abecc5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:31:45 compute-0 podman[251476]: 2025-12-09 16:31:45.227765081 +0000 UTC m=+0.146919571 container start a500b0d102006a43f7422cd9986a44539f150c89d86917f8378c08227abecc5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_saha, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:31:45 compute-0 podman[251476]: 2025-12-09 16:31:45.231767462 +0000 UTC m=+0.150921952 container attach a500b0d102006a43f7422cd9986a44539f150c89d86917f8378c08227abecc5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:31:45 compute-0 distracted_saha[251493]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:31:45 compute-0 distracted_saha[251493]: --> All data devices are unavailable
Dec 09 16:31:45 compute-0 systemd[1]: libpod-a500b0d102006a43f7422cd9986a44539f150c89d86917f8378c08227abecc5c.scope: Deactivated successfully.
Dec 09 16:31:45 compute-0 podman[251476]: 2025-12-09 16:31:45.731152882 +0000 UTC m=+0.650307392 container died a500b0d102006a43f7422cd9986a44539f150c89d86917f8378c08227abecc5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_saha, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:31:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cd69c2e8f72fc6bd293b6b0486d65db84d12989df58f8e4b2efc2d7135b4694-merged.mount: Deactivated successfully.
Dec 09 16:31:45 compute-0 podman[251476]: 2025-12-09 16:31:45.781215595 +0000 UTC m=+0.700370075 container remove a500b0d102006a43f7422cd9986a44539f150c89d86917f8378c08227abecc5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_saha, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:31:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:31:45 compute-0 systemd[1]: libpod-conmon-a500b0d102006a43f7422cd9986a44539f150c89d86917f8378c08227abecc5c.scope: Deactivated successfully.
Dec 09 16:31:45 compute-0 sudo[251399]: pam_unix(sudo:session): session closed for user root
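The two arrow lines from distracted_saha ("passed data devices: 0 physical, 3 LVM" / "All data devices are unavailable") report a no-op, not a failure: all three logical volumes named in the lvm batch call already carry ceph.* OSD tags (listed in full below), so batch filters them out and exits cleanly, and cephadm falls through to lvm list to reconcile. A dry-run sketch of the same invocation, assuming cephadm shell is usable on this host and that this ceph-volume build supports the upstream --report/--format json options of lvm batch:

    import json
    import subprocess

    cmd = ["cephadm", "shell", "--", "ceph-volume", "lvm", "batch", "--no-auto",
           "--report", "--format", "json",
           "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]
    # With every LV already consumed, the report comes back empty instead of
    # proposing new OSDs.
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    print(json.dumps(json.loads(out), indent=4))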
Dec 09 16:31:45 compute-0 sudo[251526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:31:45 compute-0 sudo[251526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:31:45 compute-0 sudo[251526]: pam_unix(sudo:session): session closed for user root
Dec 09 16:31:45 compute-0 sudo[251551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:31:45 compute-0 sudo[251551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:31:45 compute-0 ceph-mon[75222]: pgmap v981: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:46 compute-0 podman[251587]: 2025-12-09 16:31:46.271898155 +0000 UTC m=+0.046962198 container create 9183a1405f45eaa2c1c7a15d723e39b44ded08443e93f036f36409c75bdc9463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 09 16:31:46 compute-0 systemd[1]: Started libpod-conmon-9183a1405f45eaa2c1c7a15d723e39b44ded08443e93f036f36409c75bdc9463.scope.
Dec 09 16:31:46 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:31:46 compute-0 podman[251587]: 2025-12-09 16:31:46.252293214 +0000 UTC m=+0.027357357 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:31:46 compute-0 podman[251587]: 2025-12-09 16:31:46.364173575 +0000 UTC m=+0.139237638 container init 9183a1405f45eaa2c1c7a15d723e39b44ded08443e93f036f36409c75bdc9463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_booth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:31:46 compute-0 podman[251587]: 2025-12-09 16:31:46.376550038 +0000 UTC m=+0.151614091 container start 9183a1405f45eaa2c1c7a15d723e39b44ded08443e93f036f36409c75bdc9463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_booth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 09 16:31:46 compute-0 podman[251587]: 2025-12-09 16:31:46.380178058 +0000 UTC m=+0.155242151 container attach 9183a1405f45eaa2c1c7a15d723e39b44ded08443e93f036f36409c75bdc9463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_booth, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 16:31:46 compute-0 busy_booth[251603]: 167 167
Dec 09 16:31:46 compute-0 systemd[1]: libpod-9183a1405f45eaa2c1c7a15d723e39b44ded08443e93f036f36409c75bdc9463.scope: Deactivated successfully.
Dec 09 16:31:46 compute-0 conmon[251603]: conmon 9183a1405f45eaa2c1c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9183a1405f45eaa2c1c7a15d723e39b44ded08443e93f036f36409c75bdc9463.scope/container/memory.events
Dec 09 16:31:46 compute-0 podman[251587]: 2025-12-09 16:31:46.385080023 +0000 UTC m=+0.160144076 container died 9183a1405f45eaa2c1c7a15d723e39b44ded08443e93f036f36409c75bdc9463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_booth, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bd0d332f5c584070bed9b1d2f2526da3561973ce929bb2f91da35e5124b6072-merged.mount: Deactivated successfully.
Dec 09 16:31:46 compute-0 podman[251587]: 2025-12-09 16:31:46.440502895 +0000 UTC m=+0.215566948 container remove 9183a1405f45eaa2c1c7a15d723e39b44ded08443e93f036f36409c75bdc9463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_booth, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 09 16:31:46 compute-0 systemd[1]: libpod-conmon-9183a1405f45eaa2c1c7a15d723e39b44ded08443e93f036f36409c75bdc9463.scope: Deactivated successfully.
Dec 09 16:31:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:46 compute-0 podman[251629]: 2025-12-09 16:31:46.622284179 +0000 UTC m=+0.041237391 container create a422763a8c437c03327a11666f772f4c8465bad421dae946bb2155feacc023af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:31:46 compute-0 systemd[1]: Started libpod-conmon-a422763a8c437c03327a11666f772f4c8465bad421dae946bb2155feacc023af.scope.
Dec 09 16:31:46 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3c50e4209f75142b1ef1571fade2e32e70a0952a272841230dc052ce2ae7ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3c50e4209f75142b1ef1571fade2e32e70a0952a272841230dc052ce2ae7ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3c50e4209f75142b1ef1571fade2e32e70a0952a272841230dc052ce2ae7ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3c50e4209f75142b1ef1571fade2e32e70a0952a272841230dc052ce2ae7ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:46 compute-0 podman[251629]: 2025-12-09 16:31:46.607270454 +0000 UTC m=+0.026223686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:31:46 compute-0 podman[251629]: 2025-12-09 16:31:46.704532632 +0000 UTC m=+0.123485874 container init a422763a8c437c03327a11666f772f4c8465bad421dae946bb2155feacc023af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 09 16:31:46 compute-0 podman[251629]: 2025-12-09 16:31:46.712344827 +0000 UTC m=+0.131298069 container start a422763a8c437c03327a11666f772f4c8465bad421dae946bb2155feacc023af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hodgkin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 09 16:31:46 compute-0 podman[251629]: 2025-12-09 16:31:46.716365179 +0000 UTC m=+0.135318431 container attach a422763a8c437c03327a11666f772f4c8465bad421dae946bb2155feacc023af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hodgkin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]: {
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:     "0": [
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:         {
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "devices": [
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "/dev/loop3"
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             ],
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_name": "ceph_lv0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_size": "21470642176",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "name": "ceph_lv0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "tags": {
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.cluster_name": "ceph",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.crush_device_class": "",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.encrypted": "0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.objectstore": "bluestore",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.osd_id": "0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.type": "block",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.vdo": "0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.with_tpm": "0"
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             },
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "type": "block",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "vg_name": "ceph_vg0"
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:         }
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:     ],
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:     "1": [
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:         {
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "devices": [
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "/dev/loop4"
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             ],
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_name": "ceph_lv1",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_size": "21470642176",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "name": "ceph_lv1",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "tags": {
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.cluster_name": "ceph",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.crush_device_class": "",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.encrypted": "0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.objectstore": "bluestore",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.osd_id": "1",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.type": "block",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.vdo": "0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.with_tpm": "0"
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             },
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "type": "block",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "vg_name": "ceph_vg1"
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:         }
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:     ],
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:     "2": [
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:         {
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "devices": [
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "/dev/loop5"
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             ],
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_name": "ceph_lv2",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_size": "21470642176",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "name": "ceph_lv2",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "tags": {
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.cluster_name": "ceph",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.crush_device_class": "",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.encrypted": "0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.objectstore": "bluestore",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.osd_id": "2",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.type": "block",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.vdo": "0",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:                 "ceph.with_tpm": "0"
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             },
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "type": "block",
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:             "vg_name": "ceph_vg2"
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:         }
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]:     ]
Dec 09 16:31:46 compute-0 sharp_hodgkin[251647]: }
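The JSON from sharp_hodgkin is the output of the "lvm list --format json" ceph-volume call launched by the sudo line above: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags carrying the cluster fsid, the OSD fsid and the objectstore type. A minimal parsing sketch (the capture file name is hypothetical; the field names are exactly the ones printed above):

    import json

    with open("lvm-list.json") as f:   # hypothetical capture of the JSON above
        lvs_by_osd = json.load(f)

    for osd_id, lvs in sorted(lvs_by_osd.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(pv {','.join(lv['devices'])}, "
                  f"osd_fsid {tags['ceph.osd_fsid']}, "
                  f"objectstore {tags['ceph.objectstore']})")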
Dec 09 16:31:47 compute-0 systemd[1]: libpod-a422763a8c437c03327a11666f772f4c8465bad421dae946bb2155feacc023af.scope: Deactivated successfully.
Dec 09 16:31:47 compute-0 podman[251629]: 2025-12-09 16:31:47.02797061 +0000 UTC m=+0.446923852 container died a422763a8c437c03327a11666f772f4c8465bad421dae946bb2155feacc023af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 09 16:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae3c50e4209f75142b1ef1571fade2e32e70a0952a272841230dc052ce2ae7ea-merged.mount: Deactivated successfully.
Dec 09 16:31:47 compute-0 podman[251629]: 2025-12-09 16:31:47.077960101 +0000 UTC m=+0.496913353 container remove a422763a8c437c03327a11666f772f4c8465bad421dae946bb2155feacc023af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hodgkin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:31:47 compute-0 systemd[1]: libpod-conmon-a422763a8c437c03327a11666f772f4c8465bad421dae946bb2155feacc023af.scope: Deactivated successfully.
Dec 09 16:31:47 compute-0 sudo[251551]: pam_unix(sudo:session): session closed for user root
Dec 09 16:31:47 compute-0 sudo[251669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:31:47 compute-0 sudo[251669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:31:47 compute-0 sudo[251669]: pam_unix(sudo:session): session closed for user root
Dec 09 16:31:47 compute-0 sudo[251694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:31:47 compute-0 sudo[251694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:31:47 compute-0 podman[251733]: 2025-12-09 16:31:47.632114226 +0000 UTC m=+0.066829118 container create 44e7201f17b3c8b60cdd52942a61e996ed0ce11df84b12e94390fc8dc8660a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_keller, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 09 16:31:47 compute-0 systemd[1]: Started libpod-conmon-44e7201f17b3c8b60cdd52942a61e996ed0ce11df84b12e94390fc8dc8660a4f.scope.
Dec 09 16:31:47 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:31:47 compute-0 podman[251733]: 2025-12-09 16:31:47.605178841 +0000 UTC m=+0.039893793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:31:47 compute-0 podman[251733]: 2025-12-09 16:31:47.711032047 +0000 UTC m=+0.145746939 container init 44e7201f17b3c8b60cdd52942a61e996ed0ce11df84b12e94390fc8dc8660a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:31:47 compute-0 podman[251733]: 2025-12-09 16:31:47.718880904 +0000 UTC m=+0.153595756 container start 44e7201f17b3c8b60cdd52942a61e996ed0ce11df84b12e94390fc8dc8660a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_keller, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 16:31:47 compute-0 podman[251733]: 2025-12-09 16:31:47.722130603 +0000 UTC m=+0.156845495 container attach 44e7201f17b3c8b60cdd52942a61e996ed0ce11df84b12e94390fc8dc8660a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 09 16:31:47 compute-0 epic_keller[251749]: 167 167
Dec 09 16:31:47 compute-0 systemd[1]: libpod-44e7201f17b3c8b60cdd52942a61e996ed0ce11df84b12e94390fc8dc8660a4f.scope: Deactivated successfully.
Dec 09 16:31:47 compute-0 podman[251733]: 2025-12-09 16:31:47.725000883 +0000 UTC m=+0.159715745 container died 44e7201f17b3c8b60cdd52942a61e996ed0ce11df84b12e94390fc8dc8660a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-22c234cabe1c0809f39ec36d0b439514587fa7e07b72bb7ca27c79531e6bb9a6-merged.mount: Deactivated successfully.
Dec 09 16:31:47 compute-0 podman[251733]: 2025-12-09 16:31:47.905700236 +0000 UTC m=+0.340415088 container remove 44e7201f17b3c8b60cdd52942a61e996ed0ce11df84b12e94390fc8dc8660a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_keller, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 09 16:31:47 compute-0 systemd[1]: libpod-conmon-44e7201f17b3c8b60cdd52942a61e996ed0ce11df84b12e94390fc8dc8660a4f.scope: Deactivated successfully.
Dec 09 16:31:48 compute-0 ceph-mon[75222]: pgmap v982: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:48 compute-0 podman[251773]: 2025-12-09 16:31:48.114146477 +0000 UTC m=+0.047591916 container create 37723081b330ac87ab43212166f7ed735a10ae1310b9f4c079f81513355ad78d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:31:48 compute-0 systemd[1]: Started libpod-conmon-37723081b330ac87ab43212166f7ed735a10ae1310b9f4c079f81513355ad78d.scope.
Dec 09 16:31:48 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c00f9231568d6736ecfd671a363dd04587b9b97761b0aebe9cd9b2421871906/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c00f9231568d6736ecfd671a363dd04587b9b97761b0aebe9cd9b2421871906/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c00f9231568d6736ecfd671a363dd04587b9b97761b0aebe9cd9b2421871906/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c00f9231568d6736ecfd671a363dd04587b9b97761b0aebe9cd9b2421871906/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:31:48 compute-0 podman[251773]: 2025-12-09 16:31:48.182358842 +0000 UTC m=+0.115804291 container init 37723081b330ac87ab43212166f7ed735a10ae1310b9f4c079f81513355ad78d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_turing, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 09 16:31:48 compute-0 podman[251773]: 2025-12-09 16:31:48.187378281 +0000 UTC m=+0.120823710 container start 37723081b330ac87ab43212166f7ed735a10ae1310b9f4c079f81513355ad78d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:31:48 compute-0 podman[251773]: 2025-12-09 16:31:48.099759549 +0000 UTC m=+0.033205008 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:31:48 compute-0 podman[251773]: 2025-12-09 16:31:48.190940879 +0000 UTC m=+0.124386358 container attach 37723081b330ac87ab43212166f7ed735a10ae1310b9f4c079f81513355ad78d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_turing, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:31:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:48 compute-0 lvm[251867]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:31:48 compute-0 lvm[251871]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:31:48 compute-0 lvm[251870]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:31:48 compute-0 lvm[251870]: VG ceph_vg1 finished
Dec 09 16:31:48 compute-0 lvm[251871]: VG ceph_vg2 finished
Dec 09 16:31:48 compute-0 lvm[251867]: VG ceph_vg0 finished
Dec 09 16:31:48 compute-0 great_turing[251789]: {}
Dec 09 16:31:48 compute-0 systemd[1]: libpod-37723081b330ac87ab43212166f7ed735a10ae1310b9f4c079f81513355ad78d.scope: Deactivated successfully.
Dec 09 16:31:48 compute-0 systemd[1]: libpod-37723081b330ac87ab43212166f7ed735a10ae1310b9f4c079f81513355ad78d.scope: Consumed 1.219s CPU time.
Dec 09 16:31:48 compute-0 podman[251773]: 2025-12-09 16:31:48.950280553 +0000 UTC m=+0.883726022 container died 37723081b330ac87ab43212166f7ed735a10ae1310b9f4c079f81513355ad78d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_turing, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 16:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c00f9231568d6736ecfd671a363dd04587b9b97761b0aebe9cd9b2421871906-merged.mount: Deactivated successfully.
Dec 09 16:31:48 compute-0 podman[251773]: 2025-12-09 16:31:48.999365949 +0000 UTC m=+0.932811378 container remove 37723081b330ac87ab43212166f7ed735a10ae1310b9f4c079f81513355ad78d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_turing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
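The pull → create → init → start → attach → died → remove events above are the footprint of a one-shot cephadm probe container; its only recorded output is the "{}" printed under the great_turing name. A minimal sketch of driving the same lifecycle from Python follows; the image digest is copied from the log, but the in-container command is a hypothetical placeholder, since the journal does not record what cephadm actually ran:

    # Hypothetical one-shot container run mirroring the event sequence above.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # --rm makes podman emit "container remove" right after "container died",
    # matching the journal above. "true" is a placeholder command.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "true"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)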
Dec 09 16:31:49 compute-0 systemd[1]: libpod-conmon-37723081b330ac87ab43212166f7ed735a10ae1310b9f4c079f81513355ad78d.scope: Deactivated successfully.
Dec 09 16:31:49 compute-0 sudo[251694]: pam_unix(sudo:session): session closed for user root
Dec 09 16:31:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:31:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:31:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:31:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:31:49 compute-0 sudo[251888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:31:49 compute-0 sudo[251888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:31:49 compute-0 sudo[251888]: pam_unix(sudo:session): session closed for user root
Dec 09 16:31:50 compute-0 ceph-mon[75222]: pgmap v983: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:31:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:31:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:31:52 compute-0 ceph-mon[75222]: pgmap v984: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:52 compute-0 podman[251913]: 2025-12-09 16:31:52.682206665 +0000 UTC m=+0.106222836 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
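Note that the config_data field in these health_status events is a Python dict literal (single quotes, bare True), not JSON, so json.loads rejects it while ast.literal_eval parses it cleanly. A small sketch on a hand-picked excerpt of the blob above:

    # config_data in the podman health_status events is a Python literal.
    import ast

    excerpt = "{'net': 'host', 'privileged': True, 'restart': 'always'}"
    cfg = ast.literal_eval(excerpt)   # json.loads() would raise here
    print(cfg["privileged"])          # True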
Dec 09 16:31:54 compute-0 ceph-mon[75222]: pgmap v985: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:31:56 compute-0 nova_compute[243452]: 2025-12-09 16:31:56.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:31:56 compute-0 nova_compute[243452]: 2025-12-09 16:31:56.054 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 09 16:31:56 compute-0 nova_compute[243452]: 2025-12-09 16:31:56.090 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 09 16:31:56 compute-0 ceph-mon[75222]: pgmap v986: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:31:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:31:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:31:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:31:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:31:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:31:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:57 compute-0 nova_compute[243452]: 2025-12-09 16:31:57.090 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:31:57 compute-0 nova_compute[243452]: 2025-12-09 16:31:57.131 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:31:57 compute-0 nova_compute[243452]: 2025-12-09 16:31:57.132 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:31:57 compute-0 nova_compute[243452]: 2025-12-09 16:31:57.132 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:31:57 compute-0 nova_compute[243452]: 2025-12-09 16:31:57.132 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:31:57 compute-0 nova_compute[243452]: 2025-12-09 16:31:57.133 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:31:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:31:57 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3792353862' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:31:57 compute-0 nova_compute[243452]: 2025-12-09 16:31:57.713 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
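The processutils lines show nova-compute measuring pool capacity by shelling out to the ceph CLI, with the mon audit log recording the dispatched df command in between. A minimal stand-alone equivalent of that probe, using the exact command from the log; the total_bytes/total_avail_bytes keys are the standard ceph df JSON fields, assumed here rather than taken from this journal:

    # Re-run the capacity probe that nova-compute logs above.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(raw)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])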
Dec 09 16:31:57 compute-0 nova_compute[243452]: 2025-12-09 16:31:57.890 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:31:57 compute-0 nova_compute[243452]: 2025-12-09 16:31:57.891 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5088MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:31:57 compute-0 nova_compute[243452]: 2025-12-09 16:31:57.891 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:31:57 compute-0 nova_compute[243452]: 2025-12-09 16:31:57.891 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:31:58 compute-0 nova_compute[243452]: 2025-12-09 16:31:58.001 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:31:58 compute-0 nova_compute[243452]: 2025-12-09 16:31:58.001 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:31:58 compute-0 nova_compute[243452]: 2025-12-09 16:31:58.032 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:31:58 compute-0 ceph-mon[75222]: pgmap v987: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:58 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3792353862' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:31:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:31:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:31:58 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1199308029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:31:58 compute-0 nova_compute[243452]: 2025-12-09 16:31:58.650 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:31:58 compute-0 nova_compute[243452]: 2025-12-09 16:31:58.655 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:31:58 compute-0 nova_compute[243452]: 2025-12-09 16:31:58.772 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:31:58 compute-0 nova_compute[243452]: 2025-12-09 16:31:58.774 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:31:58 compute-0 nova_compute[243452]: 2025-12-09 16:31:58.775 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
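The lockutils lines around the resource audit always come in the same shape: "Acquiring", "acquired :: waited", work, "released :: held" (0.883s here, since both ceph df calls ran under the lock). A toy reimplementation of that bookkeeping with plain threading, not oslo's actual code:

    # Toy version of the acquire/wait/hold timing that oslo_concurrency logs.
    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def timed_lock(name):
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        with lock:
            waited = time.monotonic() - t0
            print(f'Lock "{name}" acquired :: waited {waited:.3f}s')
            t1 = time.monotonic()
            try:
                yield
            finally:
                held = time.monotonic() - t1
                print(f'Lock "{name}" released :: held {held:.3f}s')

    with timed_lock("compute_resources"):
        pass  # resource-tracker work would go here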
Dec 09 16:31:59 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1199308029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:31:59 compute-0 sshd-session[251978]: Invalid user test from 146.190.31.45 port 44094
Dec 09 16:31:59 compute-0 sshd-session[251978]: Connection closed by invalid user test 146.190.31.45 port 44094 [preauth]
Dec 09 16:32:00 compute-0 nova_compute[243452]: 2025-12-09 16:32:00.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:00 compute-0 nova_compute[243452]: 2025-12-09 16:32:00.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:00 compute-0 ceph-mon[75222]: pgmap v988: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:32:01 compute-0 nova_compute[243452]: 2025-12-09 16:32:01.068 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:01 compute-0 nova_compute[243452]: 2025-12-09 16:32:01.068 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:32:01 compute-0 nova_compute[243452]: 2025-12-09 16:32:01.069 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:32:01 compute-0 nova_compute[243452]: 2025-12-09 16:32:01.090 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:32:01 compute-0 nova_compute[243452]: 2025-12-09 16:32:01.090 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:02 compute-0 nova_compute[243452]: 2025-12-09 16:32:02.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:02 compute-0 nova_compute[243452]: 2025-12-09 16:32:02.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:02 compute-0 nova_compute[243452]: 2025-12-09 16:32:02.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:32:02 compute-0 ceph-mon[75222]: pgmap v989: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:03 compute-0 nova_compute[243452]: 2025-12-09 16:32:03.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:04 compute-0 nova_compute[243452]: 2025-12-09 16:32:04.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:04 compute-0 ceph-mon[75222]: pgmap v990: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:05 compute-0 nova_compute[243452]: 2025-12-09 16:32:05.048 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:32:06 compute-0 nova_compute[243452]: 2025-12-09 16:32:06.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:06 compute-0 nova_compute[243452]: 2025-12-09 16:32:06.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:06 compute-0 nova_compute[243452]: 2025-12-09 16:32:06.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 09 16:32:06 compute-0 ceph-mon[75222]: pgmap v991: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:07 compute-0 podman[251981]: 2025-12-09 16:32:07.111036423 +0000 UTC m=+0.044878311 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 09 16:32:07 compute-0 podman[251980]: 2025-12-09 16:32:07.150948076 +0000 UTC m=+0.087018896 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 09 16:32:08 compute-0 ceph-mon[75222]: pgmap v992: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:32:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/711053339' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:32:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:32:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/711053339' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:32:10 compute-0 ceph-mon[75222]: pgmap v993: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/711053339' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:32:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/711053339' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:32:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:32:12 compute-0 ceph-mon[75222]: pgmap v994: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.240877) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297933240915, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1150, "num_deletes": 253, "total_data_size": 1698022, "memory_usage": 1719736, "flush_reason": "Manual Compaction"}
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297933253334, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1670646, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19854, "largest_seqno": 21003, "table_properties": {"data_size": 1665097, "index_size": 2944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12036, "raw_average_key_size": 19, "raw_value_size": 1653814, "raw_average_value_size": 2742, "num_data_blocks": 134, "num_entries": 603, "num_filter_entries": 603, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765297825, "oldest_key_time": 1765297825, "file_creation_time": 1765297933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 12492 microseconds, and 5444 cpu microseconds.
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.253372) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1670646 bytes OK
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.253390) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.254803) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.254821) EVENT_LOG_v1 {"time_micros": 1765297933254815, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.254842) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1692707, prev total WAL file size 1692707, number of live WAL files 2.
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.255856) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1631KB)], [47(7576KB)]
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297933255921, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9429191, "oldest_snapshot_seqno": -1}
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4354 keys, 7664625 bytes, temperature: kUnknown
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297933314678, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7664625, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7634262, "index_size": 18349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 107739, "raw_average_key_size": 24, "raw_value_size": 7554192, "raw_average_value_size": 1735, "num_data_blocks": 767, "num_entries": 4354, "num_filter_entries": 4354, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765297933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.314948) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7664625 bytes
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.316104) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.2 rd, 130.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.4 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.2) write-amplify(4.6) OK, records in: 4875, records dropped: 521 output_compression: NoCompression
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.316129) EVENT_LOG_v1 {"time_micros": 1765297933316118, "job": 24, "event": "compaction_finished", "compaction_time_micros": 58863, "compaction_time_cpu_micros": 31807, "output_level": 6, "num_output_files": 1, "total_output_size": 7664625, "num_input_records": 4875, "num_output_records": 4354, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297933316648, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765297933318502, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.255744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.318600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.318606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.318608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.318609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:32:13 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:32:13.318612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
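The amplification figures in the JOB 24 summary can be re-derived from byte counts logged in the same block: 1,670,646 bytes of L0 input (table #49), 9,429,191 bytes read in total (input_data_size in the compaction_started event), and 7,664,625 bytes written (table #50). In Python:

    # Re-derive the JOB 24 amplification factors from the logged byte counts.
    l0_input = 1_670_646       # L0 flush output, table #49
    total_read = 9_429_191     # input_data_size in compaction_started
    total_written = 7_664_625  # total_output_size in compaction_finished

    print(round(total_written / l0_input, 1))                 # 4.6  write-amplify
    print(round((total_read + total_written) / l0_input, 1))  # 10.2 read-write-amplify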
Dec 09 16:32:14 compute-0 ceph-mon[75222]: pgmap v995: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:32:16 compute-0 ceph-mon[75222]: pgmap v996: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:32:17.852 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:32:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:32:17.853 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:32:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:32:17.853 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:32:18 compute-0 ceph-mon[75222]: pgmap v997: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:19 compute-0 ceph-mon[75222]: pgmap v998: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:32:21 compute-0 ceph-mon[75222]: pgmap v999: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:23 compute-0 podman[252027]: 2025-12-09 16:32:23.637147379 +0000 UTC m=+0.074193312 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 09 16:32:23 compute-0 ceph-mon[75222]: pgmap v1000: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:25 compute-0 ceph-mon[75222]: pgmap v1001: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:32:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:32:25
Dec 09 16:32:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:32:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:32:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'volumes', 'default.rgw.log', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta']
Dec 09 16:32:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:32:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:32:27 compute-0 ceph-mon[75222]: pgmap v1002: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:29 compute-0 ceph-mon[75222]: pgmap v1003: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:32:31 compute-0 ceph-mon[75222]: pgmap v1004: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:33 compute-0 ceph-mon[75222]: pgmap v1005: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:35 compute-0 ceph-mon[75222]: pgmap v1006: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.5328861014749993e-07 of space, bias 1.0, pg target 0.00010598658304424998 quantized to 32 (current 32)
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1292019567274135e-06 of space, bias 4.0, pg target 0.002555042348072896 quantized to 16 (current 16)
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:32:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
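The pg_autoscaler lines above expose their arithmetic directly: each pool's pg target is its share of raw capacity times its bias times the cluster-wide PG budget, which for this cluster works out to 300 (3 OSDs at the default mon_target_pg_per_osd of 100). The following is a minimal sketch that reproduces the logged numbers under that assumption; the grow-only quantizer matches every line here, though the real mgr module can also shrink pools subject to additional thresholds.

    # Reproduce the pg_autoscaler numbers logged above. Assumes 3 OSDs and
    # the default mon_target_pg_per_osd = 100, i.e. a 300-PG budget.
    def pg_target(usage_ratio: float, bias: float,
                  osds: int = 3, target_pg_per_osd: int = 100) -> float:
        # "Pool ... using R of space, bias B, pg target R * B * 300"
        return usage_ratio * bias * osds * target_pg_per_osd

    def quantize(target: float, current: int) -> int:
        # Grow-only sketch: round the target up to a power of two, never
        # dropping below the current pg_num. (The real module can also
        # shrink, gated by a 3x threshold and pg_num_min; omitted here.)
        want = 1
        while want < target:
            want *= 2
        return max(want, current)

    # '.mgr': 7.185749983720779e-06 of space, bias 1.0 -> ~0.0021557, stays 1
    print(pg_target(7.185749983720779e-06, 1.0), quantize(0.0022, 1))
    # 'cephfs.cephfs.meta': 2.1292019567274135e-06, bias 4.0 -> ~0.0025550, stays 16
    print(pg_target(2.1292019567274135e-06, 4.0), quantize(0.0026, 16))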
Dec 09 16:32:37 compute-0 podman[252048]: 2025-12-09 16:32:37.620108652 +0000 UTC m=+0.058205829 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:32:37 compute-0 podman[252047]: 2025-12-09 16:32:37.643584251 +0000 UTC m=+0.089382231 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
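The two podman records above are routine container health checks: ovn_metadata_agent and ovn_controller both report health_status=healthy with a zero failing streak. A hedged sketch for reading the same state back on the host follows; it assumes the containers exist locally and that this podman build exposes health under .State.Health (field names can vary between podman releases).

    import json
    import subprocess

    # Read back the health state podman logged above. Assumes a
    # .State.Health object with Status/FailingStreak fields, as on
    # recent podman releases.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True)
        health = json.loads(out.stdout)
        print(name, health.get("Status"), "failing streak:",
              health.get("FailingStreak"))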
Dec 09 16:32:37 compute-0 ceph-mon[75222]: pgmap v1007: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:38 compute-0 nova_compute[243452]: 2025-12-09 16:32:38.800 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec 09 16:32:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec 09 16:32:39 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec 09 16:32:39 compute-0 ceph-mon[75222]: pgmap v1008: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:32:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 5 op/s
Dec 09 16:32:40 compute-0 ceph-mon[75222]: osdmap e142: 3 total, 3 up, 3 in
Dec 09 16:32:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:32:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Dec 09 16:32:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Dec 09 16:32:41 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Dec 09 16:32:41 compute-0 ceph-mon[75222]: pgmap v1010: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 5 op/s
Dec 09 16:32:41 compute-0 sshd-session[252093]: Invalid user test from 146.190.31.45 port 32850
Dec 09 16:32:42 compute-0 sshd-session[252093]: Connection closed by invalid user test 146.190.31.45 port 32850 [preauth]
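The two sshd-session lines are unrelated to the deployment: an internet scanner probing the throwaway username test from 146.190.31.45 and giving up before authentication. A small sketch for tallying such pre-auth probes out of a saved journal dump; journal.txt is a hypothetical capture, not a file from this host.

    import re
    from collections import Counter

    # Tally failed pre-auth probes like the 'Invalid user test from
    # 146.190.31.45' line above, grouped by source IP and username.
    PAT = re.compile(r"sshd[^:]*: Invalid user (\S+) from (\S+) port (\d+)")

    def tally(path: str) -> Counter:
        hits = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = PAT.search(line)
                if m:
                    user, ip, _port = m.groups()
                    hits[(ip, user)] += 1
        return hits

    for (ip, user), n in tally("journal.txt").most_common(10):
        print(f"{n:5d}  {ip:15s}  {user}")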
Dec 09 16:32:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 6 op/s
Dec 09 16:32:42 compute-0 ceph-mon[75222]: osdmap e143: 3 total, 3 up, 3 in
Dec 09 16:32:43 compute-0 ceph-mon[75222]: pgmap v1012: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 6 op/s
Dec 09 16:32:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 09 16:32:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:32:45 compute-0 ceph-mon[75222]: pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 09 16:32:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 09 16:32:47 compute-0 ceph-mon[75222]: pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 09 16:32:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 4.6 MiB/s wr, 42 op/s
Dec 09 16:32:49 compute-0 sudo[252095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:32:49 compute-0 sudo[252095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:32:49 compute-0 sudo[252095]: pam_unix(sudo:session): session closed for user root
Dec 09 16:32:49 compute-0 sudo[252120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:32:49 compute-0 sudo[252120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:32:49 compute-0 ceph-mon[75222]: pgmap v1015: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 4.6 MiB/s wr, 42 op/s
Dec 09 16:32:49 compute-0 sudo[252120]: pam_unix(sudo:session): session closed for user root
Dec 09 16:32:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:32:49 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:32:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:32:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:32:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:32:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:32:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:32:49 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:32:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:32:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:32:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:32:49 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
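The burst of handle_command/audit records above is the cephadm mgr module refreshing this host before it touches any disks: it regenerates a minimal ceph.conf, fetches the client.admin and client.bootstrap-osd keys, and checks the OSD tree for destroyed OSDs. The same mon commands can be issued by hand; the sketch below uses the ceph CLI from a node holding an admin keyring (cephadm itself drives them through the mgr, not the CLI).

    import subprocess

    # CLI equivalents of the mon commands audited above. Requires an
    # admin keyring; output is truncated for brevity.
    for cmd in (
        ["ceph", "config", "generate-minimal-conf"],
        ["ceph", "auth", "get", "client.admin"],
        ["ceph", "auth", "get", "client.bootstrap-osd"],
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
    ):
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        print("$", " ".join(cmd))
        print(out.stdout[:200])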
Dec 09 16:32:50 compute-0 sudo[252176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:32:50 compute-0 sudo[252176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:32:50 compute-0 sudo[252176]: pam_unix(sudo:session): session closed for user root
Dec 09 16:32:50 compute-0 sudo[252201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:32:50 compute-0 sudo[252201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:32:50 compute-0 podman[252238]: 2025-12-09 16:32:50.340145865 +0000 UTC m=+0.052132712 container create 1424befa4cc122f802eff1c65283f6cc51a81d57f041e2cfc7ca7d7275328170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Dec 09 16:32:50 compute-0 systemd[1]: Started libpod-conmon-1424befa4cc122f802eff1c65283f6cc51a81d57f041e2cfc7ca7d7275328170.scope.
Dec 09 16:32:50 compute-0 podman[252238]: 2025-12-09 16:32:50.317044086 +0000 UTC m=+0.029030983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:32:50 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:32:50 compute-0 podman[252238]: 2025-12-09 16:32:50.431219782 +0000 UTC m=+0.143206649 container init 1424befa4cc122f802eff1c65283f6cc51a81d57f041e2cfc7ca7d7275328170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_franklin, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 16:32:50 compute-0 podman[252238]: 2025-12-09 16:32:50.43910998 +0000 UTC m=+0.151096827 container start 1424befa4cc122f802eff1c65283f6cc51a81d57f041e2cfc7ca7d7275328170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_franklin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:32:50 compute-0 podman[252238]: 2025-12-09 16:32:50.442641667 +0000 UTC m=+0.154628544 container attach 1424befa4cc122f802eff1c65283f6cc51a81d57f041e2cfc7ca7d7275328170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_franklin, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 09 16:32:50 compute-0 boring_franklin[252254]: 167 167
Dec 09 16:32:50 compute-0 systemd[1]: libpod-1424befa4cc122f802eff1c65283f6cc51a81d57f041e2cfc7ca7d7275328170.scope: Deactivated successfully.
Dec 09 16:32:50 compute-0 conmon[252254]: conmon 1424befa4cc122f802ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1424befa4cc122f802eff1c65283f6cc51a81d57f041e2cfc7ca7d7275328170.scope/container/memory.events
Dec 09 16:32:50 compute-0 podman[252238]: 2025-12-09 16:32:50.449216259 +0000 UTC m=+0.161203116 container died 1424befa4cc122f802eff1c65283f6cc51a81d57f041e2cfc7ca7d7275328170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:32:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8157cf437da4b2aac1c92f9bcd6e412f82b2856fe1361571f76734ad36ba5346-merged.mount: Deactivated successfully.
Dec 09 16:32:50 compute-0 podman[252238]: 2025-12-09 16:32:50.492080924 +0000 UTC m=+0.204067771 container remove 1424befa4cc122f802eff1c65283f6cc51a81d57f041e2cfc7ca7d7275328170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_franklin, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:32:50 compute-0 systemd[1]: libpod-conmon-1424befa4cc122f802eff1c65283f6cc51a81d57f041e2cfc7ca7d7275328170.scope: Deactivated successfully.
Dec 09 16:32:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Dec 09 16:32:50 compute-0 podman[252277]: 2025-12-09 16:32:50.668696754 +0000 UTC m=+0.053693814 container create ecbac32a79bdef7b67e4d0755382000a85f9ce776a0e67c75a9bcffc4a1b5b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Dec 09 16:32:50 compute-0 systemd[1]: Started libpod-conmon-ecbac32a79bdef7b67e4d0755382000a85f9ce776a0e67c75a9bcffc4a1b5b05.scope.
Dec 09 16:32:50 compute-0 podman[252277]: 2025-12-09 16:32:50.64319212 +0000 UTC m=+0.028189200 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:32:50 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ecb945641a8c2920813d2fb7e93c6c3e7ca6d48a778b6265008353bb831ca2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ecb945641a8c2920813d2fb7e93c6c3e7ca6d48a778b6265008353bb831ca2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ecb945641a8c2920813d2fb7e93c6c3e7ca6d48a778b6265008353bb831ca2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ecb945641a8c2920813d2fb7e93c6c3e7ca6d48a778b6265008353bb831ca2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ecb945641a8c2920813d2fb7e93c6c3e7ca6d48a778b6265008353bb831ca2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:50 compute-0 podman[252277]: 2025-12-09 16:32:50.770701313 +0000 UTC m=+0.155698393 container init ecbac32a79bdef7b67e4d0755382000a85f9ce776a0e67c75a9bcffc4a1b5b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:32:50 compute-0 podman[252277]: 2025-12-09 16:32:50.77961493 +0000 UTC m=+0.164611990 container start ecbac32a79bdef7b67e4d0755382000a85f9ce776a0e67c75a9bcffc4a1b5b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_antonelli, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:32:50 compute-0 podman[252277]: 2025-12-09 16:32:50.783382664 +0000 UTC m=+0.168379824 container attach ecbac32a79bdef7b67e4d0755382000a85f9ce776a0e67c75a9bcffc4a1b5b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_antonelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:32:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:32:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:32:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:32:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:32:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:32:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:32:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:32:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Dec 09 16:32:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Dec 09 16:32:50 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Dec 09 16:32:51 compute-0 distracted_antonelli[252293]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:32:51 compute-0 distracted_antonelli[252293]: --> All data devices are unavailable
Dec 09 16:32:51 compute-0 systemd[1]: libpod-ecbac32a79bdef7b67e4d0755382000a85f9ce776a0e67c75a9bcffc4a1b5b05.scope: Deactivated successfully.
Dec 09 16:32:51 compute-0 podman[252277]: 2025-12-09 16:32:51.339513313 +0000 UTC m=+0.724510373 container died ecbac32a79bdef7b67e4d0755382000a85f9ce776a0e67c75a9bcffc4a1b5b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:32:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-91ecb945641a8c2920813d2fb7e93c6c3e7ca6d48a778b6265008353bb831ca2-merged.mount: Deactivated successfully.
Dec 09 16:32:51 compute-0 podman[252277]: 2025-12-09 16:32:51.386208373 +0000 UTC m=+0.771205463 container remove ecbac32a79bdef7b67e4d0755382000a85f9ce776a0e67c75a9bcffc4a1b5b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_antonelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:32:51 compute-0 systemd[1]: libpod-conmon-ecbac32a79bdef7b67e4d0755382000a85f9ce776a0e67c75a9bcffc4a1b5b05.scope: Deactivated successfully.
Dec 09 16:32:51 compute-0 sudo[252201]: pam_unix(sudo:session): session closed for user root
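The lvm batch run above ("passed data devices: 0 physical, 3 LVM", "All data devices are unavailable") is a clean no-op: ceph-volume rejects the three logical volumes because they already carry OSDs, as the lvm list output below confirms. A sketch of the same check follows, assuming an LVM build with JSON report support; an LV whose tags already include ceph.osd_id is exactly what batch considers unavailable.

    import json
    import subprocess

    # List LVs that already belong to a Ceph OSD, i.e. the ones
    # 'ceph-volume lvm batch' reports as unavailable.
    def tagged_lvs():
        out = subprocess.run(
            ["lvs", "--reportformat", "json", "-o", "lv_path,lv_tags"],
            capture_output=True, text=True, check=True)
        for lv in json.loads(out.stdout)["report"][0]["lv"]:
            tags = dict(t.split("=", 1)
                        for t in lv["lv_tags"].split(",") if "=" in t)
            if "ceph.osd_id" in tags:
                yield lv["lv_path"], tags["ceph.osd_id"], tags.get("ceph.osd_fsid")

    for path, osd_id, fsid in tagged_lvs():
        print(f"{path} -> osd.{osd_id} (fsid {fsid})")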
Dec 09 16:32:51 compute-0 sudo[252328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:32:51 compute-0 sudo[252328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:32:51 compute-0 sudo[252328]: pam_unix(sudo:session): session closed for user root
Dec 09 16:32:51 compute-0 sudo[252353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:32:51 compute-0 sudo[252353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:32:51 compute-0 podman[252390]: 2025-12-09 16:32:51.905395441 +0000 UTC m=+0.056415470 container create e9aa50d44784ea247861cd0057ad30f0589ac2dc1d52d06e3cd0185ca5fcadcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:32:51 compute-0 systemd[1]: Started libpod-conmon-e9aa50d44784ea247861cd0057ad30f0589ac2dc1d52d06e3cd0185ca5fcadcc.scope.
Dec 09 16:32:51 compute-0 ceph-mon[75222]: pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Dec 09 16:32:51 compute-0 ceph-mon[75222]: osdmap e144: 3 total, 3 up, 3 in
Dec 09 16:32:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:32:51 compute-0 podman[252390]: 2025-12-09 16:32:51.88509436 +0000 UTC m=+0.036114419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:32:51 compute-0 podman[252390]: 2025-12-09 16:32:51.980388144 +0000 UTC m=+0.131408163 container init e9aa50d44784ea247861cd0057ad30f0589ac2dc1d52d06e3cd0185ca5fcadcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wilson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:32:51 compute-0 podman[252390]: 2025-12-09 16:32:51.985594998 +0000 UTC m=+0.136615017 container start e9aa50d44784ea247861cd0057ad30f0589ac2dc1d52d06e3cd0185ca5fcadcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 09 16:32:51 compute-0 podman[252390]: 2025-12-09 16:32:51.989248909 +0000 UTC m=+0.140268948 container attach e9aa50d44784ea247861cd0057ad30f0589ac2dc1d52d06e3cd0185ca5fcadcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wilson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:32:51 compute-0 epic_wilson[252406]: 167 167
Dec 09 16:32:51 compute-0 systemd[1]: libpod-e9aa50d44784ea247861cd0057ad30f0589ac2dc1d52d06e3cd0185ca5fcadcc.scope: Deactivated successfully.
Dec 09 16:32:51 compute-0 conmon[252406]: conmon e9aa50d44784ea247861 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9aa50d44784ea247861cd0057ad30f0589ac2dc1d52d06e3cd0185ca5fcadcc.scope/container/memory.events
Dec 09 16:32:51 compute-0 podman[252390]: 2025-12-09 16:32:51.993475765 +0000 UTC m=+0.144495774 container died e9aa50d44784ea247861cd0057ad30f0589ac2dc1d52d06e3cd0185ca5fcadcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 09 16:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eea22bc69baf9d028683a508e5c853751107ef3091f9baf0c504e3a924e9b2d-merged.mount: Deactivated successfully.
Dec 09 16:32:52 compute-0 podman[252390]: 2025-12-09 16:32:52.024867163 +0000 UTC m=+0.175887182 container remove e9aa50d44784ea247861cd0057ad30f0589ac2dc1d52d06e3cd0185ca5fcadcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:32:52 compute-0 systemd[1]: libpod-conmon-e9aa50d44784ea247861cd0057ad30f0589ac2dc1d52d06e3cd0185ca5fcadcc.scope: Deactivated successfully.
Dec 09 16:32:52 compute-0 podman[252430]: 2025-12-09 16:32:52.22343022 +0000 UTC m=+0.055510325 container create 6203a74dae92ebf5a515267953b50afee0ffbfc981758f97f442f781814e1334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ardinghelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:32:52 compute-0 systemd[1]: Started libpod-conmon-6203a74dae92ebf5a515267953b50afee0ffbfc981758f97f442f781814e1334.scope.
Dec 09 16:32:52 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:32:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7ee1d19d3db043f5eab26c947706c7428d50c1d076db937bc9783ec2e8cab3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7ee1d19d3db043f5eab26c947706c7428d50c1d076db937bc9783ec2e8cab3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7ee1d19d3db043f5eab26c947706c7428d50c1d076db937bc9783ec2e8cab3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7ee1d19d3db043f5eab26c947706c7428d50c1d076db937bc9783ec2e8cab3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:52 compute-0 podman[252430]: 2025-12-09 16:32:52.201225797 +0000 UTC m=+0.033305922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:32:52 compute-0 podman[252430]: 2025-12-09 16:32:52.301302232 +0000 UTC m=+0.133382367 container init 6203a74dae92ebf5a515267953b50afee0ffbfc981758f97f442f781814e1334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 16:32:52 compute-0 podman[252430]: 2025-12-09 16:32:52.307143894 +0000 UTC m=+0.139224039 container start 6203a74dae92ebf5a515267953b50afee0ffbfc981758f97f442f781814e1334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ardinghelli, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:32:52 compute-0 podman[252430]: 2025-12-09 16:32:52.310747483 +0000 UTC m=+0.142827608 container attach 6203a74dae92ebf5a515267953b50afee0ffbfc981758f97f442f781814e1334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]: {
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:     "0": [
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:         {
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "devices": [
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "/dev/loop3"
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             ],
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_name": "ceph_lv0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_size": "21470642176",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "name": "ceph_lv0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "tags": {
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.cluster_name": "ceph",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.crush_device_class": "",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.encrypted": "0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.objectstore": "bluestore",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.osd_id": "0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.type": "block",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.vdo": "0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.with_tpm": "0"
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             },
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "type": "block",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "vg_name": "ceph_vg0"
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:         }
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:     ],
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:     "1": [
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:         {
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "devices": [
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "/dev/loop4"
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             ],
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_name": "ceph_lv1",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_size": "21470642176",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "name": "ceph_lv1",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "tags": {
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.cluster_name": "ceph",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.crush_device_class": "",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.encrypted": "0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.objectstore": "bluestore",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.osd_id": "1",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.type": "block",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.vdo": "0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.with_tpm": "0"
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             },
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "type": "block",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "vg_name": "ceph_vg1"
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:         }
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:     ],
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:     "2": [
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:         {
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "devices": [
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "/dev/loop5"
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             ],
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_name": "ceph_lv2",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_size": "21470642176",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "name": "ceph_lv2",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "tags": {
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.cluster_name": "ceph",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.crush_device_class": "",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.encrypted": "0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.objectstore": "bluestore",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.osd_id": "2",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.type": "block",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.vdo": "0",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:                 "ceph.with_tpm": "0"
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             },
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "type": "block",
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:             "vg_name": "ceph_vg2"
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:         }
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]:     ]
Dec 09 16:32:52 compute-0 quizzical_ardinghelli[252446]: }
Dec 09 16:32:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Dec 09 16:32:52 compute-0 systemd[1]: libpod-6203a74dae92ebf5a515267953b50afee0ffbfc981758f97f442f781814e1334.scope: Deactivated successfully.
Dec 09 16:32:52 compute-0 podman[252430]: 2025-12-09 16:32:52.629154543 +0000 UTC m=+0.461234648 container died 6203a74dae92ebf5a515267953b50afee0ffbfc981758f97f442f781814e1334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 16:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f7ee1d19d3db043f5eab26c947706c7428d50c1d076db937bc9783ec2e8cab3-merged.mount: Deactivated successfully.
Dec 09 16:32:52 compute-0 podman[252430]: 2025-12-09 16:32:52.672904742 +0000 UTC m=+0.504984867 container remove 6203a74dae92ebf5a515267953b50afee0ffbfc981758f97f442f781814e1334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:32:52 compute-0 systemd[1]: libpod-conmon-6203a74dae92ebf5a515267953b50afee0ffbfc981758f97f442f781814e1334.scope: Deactivated successfully.
Dec 09 16:32:52 compute-0 sudo[252353]: pam_unix(sudo:session): session closed for user root
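The JSON payload printed above (from ceph-volume lvm list --format json) maps each OSD id to its logical volume and backing device. A short sketch reducing it to one line per OSD; lvm_list.json is a hypothetical capture of that output.

    import json

    # Summarize 'ceph-volume lvm list --format json' output, one line per
    # OSD: id, LV path, OSD fsid, and the physical device behind the VG.
    with open("lvm_list.json", encoding="utf-8") as fh:
        listing = json.load(fh)

    for osd_id, entries in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for entry in entries:
            if entry["type"] != "block":
                continue
            print(f"osd.{osd_id}: {entry['lv_path']} "
                  f"(fsid {entry['tags']['ceph.osd_fsid']}, "
                  f"on {','.join(entry['devices'])})")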
Dec 09 16:32:52 compute-0 sudo[252468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:32:52 compute-0 sudo[252468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:32:52 compute-0 sudo[252468]: pam_unix(sudo:session): session closed for user root
Dec 09 16:32:52 compute-0 sudo[252493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:32:52 compute-0 sudo[252493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:32:53 compute-0 podman[252530]: 2025-12-09 16:32:53.11720754 +0000 UTC m=+0.038009721 container create 992462dc8f59e2a6821c4420baf6410208ed80bea0a3a4ac2899da9ca5b05f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:32:53 compute-0 systemd[1]: Started libpod-conmon-992462dc8f59e2a6821c4420baf6410208ed80bea0a3a4ac2899da9ca5b05f47.scope.
Dec 09 16:32:53 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:32:53 compute-0 podman[252530]: 2025-12-09 16:32:53.191199935 +0000 UTC m=+0.112002106 container init 992462dc8f59e2a6821c4420baf6410208ed80bea0a3a4ac2899da9ca5b05f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mendeleev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 09 16:32:53 compute-0 podman[252530]: 2025-12-09 16:32:53.099575673 +0000 UTC m=+0.020377834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:32:53 compute-0 podman[252530]: 2025-12-09 16:32:53.197993723 +0000 UTC m=+0.118795884 container start 992462dc8f59e2a6821c4420baf6410208ed80bea0a3a4ac2899da9ca5b05f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mendeleev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:32:53 compute-0 podman[252530]: 2025-12-09 16:32:53.201596343 +0000 UTC m=+0.122398544 container attach 992462dc8f59e2a6821c4420baf6410208ed80bea0a3a4ac2899da9ca5b05f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mendeleev, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:32:53 compute-0 systemd[1]: libpod-992462dc8f59e2a6821c4420baf6410208ed80bea0a3a4ac2899da9ca5b05f47.scope: Deactivated successfully.
Dec 09 16:32:53 compute-0 focused_mendeleev[252547]: 167 167
Dec 09 16:32:53 compute-0 conmon[252547]: conmon 992462dc8f59e2a6821c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-992462dc8f59e2a6821c4420baf6410208ed80bea0a3a4ac2899da9ca5b05f47.scope/container/memory.events
Dec 09 16:32:53 compute-0 podman[252530]: 2025-12-09 16:32:53.203899596 +0000 UTC m=+0.124701747 container died 992462dc8f59e2a6821c4420baf6410208ed80bea0a3a4ac2899da9ca5b05f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mendeleev, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:32:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-71f9342a651cf2595e7860d6f19b5211e847862606df92129e9d25cc2418f1be-merged.mount: Deactivated successfully.
Dec 09 16:32:53 compute-0 podman[252530]: 2025-12-09 16:32:53.241875046 +0000 UTC m=+0.162677237 container remove 992462dc8f59e2a6821c4420baf6410208ed80bea0a3a4ac2899da9ca5b05f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mendeleev, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:32:53 compute-0 systemd[1]: libpod-conmon-992462dc8f59e2a6821c4420baf6410208ed80bea0a3a4ac2899da9ca5b05f47.scope: Deactivated successfully.
Dec 09 16:32:53 compute-0 podman[252571]: 2025-12-09 16:32:53.447781435 +0000 UTC m=+0.070198151 container create eeedbe791aee2fe5023767c937671b78fb9cae4ff7877ffd55bb1ff946521b2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_cohen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:32:53 compute-0 systemd[1]: Started libpod-conmon-eeedbe791aee2fe5023767c937671b78fb9cae4ff7877ffd55bb1ff946521b2f.scope.
Dec 09 16:32:53 compute-0 podman[252571]: 2025-12-09 16:32:53.420673286 +0000 UTC m=+0.043090062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:32:53 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92cda37f3b98af2c7b78396b492dde473e46e286a64728d1269e7e1254c37be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92cda37f3b98af2c7b78396b492dde473e46e286a64728d1269e7e1254c37be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92cda37f3b98af2c7b78396b492dde473e46e286a64728d1269e7e1254c37be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92cda37f3b98af2c7b78396b492dde473e46e286a64728d1269e7e1254c37be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:32:53 compute-0 podman[252571]: 2025-12-09 16:32:53.559818721 +0000 UTC m=+0.182235477 container init eeedbe791aee2fe5023767c937671b78fb9cae4ff7877ffd55bb1ff946521b2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_cohen, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:32:53 compute-0 podman[252571]: 2025-12-09 16:32:53.57388357 +0000 UTC m=+0.196300256 container start eeedbe791aee2fe5023767c937671b78fb9cae4ff7877ffd55bb1ff946521b2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:32:53 compute-0 podman[252571]: 2025-12-09 16:32:53.577896421 +0000 UTC m=+0.200313177 container attach eeedbe791aee2fe5023767c937671b78fb9cae4ff7877ffd55bb1ff946521b2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:32:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Dec 09 16:32:53 compute-0 ceph-mon[75222]: pgmap v1018: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Dec 09 16:32:53 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Dec 09 16:32:53 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 09 16:32:54 compute-0 lvm[252673]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:32:54 compute-0 lvm[252673]: VG ceph_vg0 finished
Dec 09 16:32:54 compute-0 lvm[252674]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:32:54 compute-0 lvm[252674]: VG ceph_vg1 finished
Dec 09 16:32:54 compute-0 lvm[252681]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:32:54 compute-0 lvm[252681]: VG ceph_vg2 finished
Dec 09 16:32:54 compute-0 podman[252664]: 2025-12-09 16:32:54.35694525 +0000 UTC m=+0.066117768 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 09 16:32:54 compute-0 busy_cohen[252588]: {}
Dec 09 16:32:54 compute-0 systemd[1]: libpod-eeedbe791aee2fe5023767c937671b78fb9cae4ff7877ffd55bb1ff946521b2f.scope: Deactivated successfully.
Dec 09 16:32:54 compute-0 systemd[1]: libpod-eeedbe791aee2fe5023767c937671b78fb9cae4ff7877ffd55bb1ff946521b2f.scope: Consumed 1.416s CPU time.
Dec 09 16:32:54 compute-0 podman[252692]: 2025-12-09 16:32:54.534095146 +0000 UTC m=+0.044898752 container died eeedbe791aee2fe5023767c937671b78fb9cae4ff7877ffd55bb1ff946521b2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_cohen, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:32:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c92cda37f3b98af2c7b78396b492dde473e46e286a64728d1269e7e1254c37be-merged.mount: Deactivated successfully.
Dec 09 16:32:54 compute-0 podman[252692]: 2025-12-09 16:32:54.575047078 +0000 UTC m=+0.085850654 container remove eeedbe791aee2fe5023767c937671b78fb9cae4ff7877ffd55bb1ff946521b2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_cohen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:32:54 compute-0 systemd[1]: libpod-conmon-eeedbe791aee2fe5023767c937671b78fb9cae4ff7877ffd55bb1ff946521b2f.scope: Deactivated successfully.
Dec 09 16:32:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 13 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Dec 09 16:32:54 compute-0 sudo[252493]: pam_unix(sudo:session): session closed for user root
Dec 09 16:32:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:32:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:32:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:32:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:32:54 compute-0 sudo[252707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:32:54 compute-0 sudo[252707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:32:54 compute-0 sudo[252707]: pam_unix(sudo:session): session closed for user root
Dec 09 16:32:54 compute-0 ceph-mon[75222]: osdmap e145: 3 total, 3 up, 3 in
Dec 09 16:32:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:32:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:32:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:32:56 compute-0 ceph-mon[75222]: pgmap v1020: 305 pgs: 305 active+clean; 13 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Dec 09 16:32:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:32:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:32:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:32:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:32:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:32:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:32:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 13 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 39 op/s
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.084 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.084 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.084 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.085 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.085 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:32:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:32:57 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947773181' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.632 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.788 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.789 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5101MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.789 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.790 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.914 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:32:57 compute-0 nova_compute[243452]: 2025-12-09 16:32:57.914 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:32:58 compute-0 ceph-mon[75222]: pgmap v1021: 305 pgs: 305 active+clean; 13 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 39 op/s
Dec 09 16:32:58 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2947773181' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:32:58 compute-0 nova_compute[243452]: 2025-12-09 16:32:58.024 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Refreshing inventories for resource provider ca130087-db63-46e1-b278-a80bb66e6865 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 09 16:32:58 compute-0 nova_compute[243452]: 2025-12-09 16:32:58.094 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Updating ProviderTree inventory for provider ca130087-db63-46e1-b278-a80bb66e6865 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 09 16:32:58 compute-0 nova_compute[243452]: 2025-12-09 16:32:58.094 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Updating inventory in ProviderTree for provider ca130087-db63-46e1-b278-a80bb66e6865 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 16:32:58 compute-0 nova_compute[243452]: 2025-12-09 16:32:58.119 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Refreshing aggregate associations for resource provider ca130087-db63-46e1-b278-a80bb66e6865, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 09 16:32:58 compute-0 nova_compute[243452]: 2025-12-09 16:32:58.143 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Refreshing trait associations for resource provider ca130087-db63-46e1-b278-a80bb66e6865, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 09 16:32:58 compute-0 nova_compute[243452]: 2025-12-09 16:32:58.163 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:32:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 13 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 39 op/s
Dec 09 16:32:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:32:58 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1718960985' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:32:58 compute-0 nova_compute[243452]: 2025-12-09 16:32:58.728 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:32:58 compute-0 nova_compute[243452]: 2025-12-09 16:32:58.734 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:32:58 compute-0 nova_compute[243452]: 2025-12-09 16:32:58.762 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:32:58 compute-0 nova_compute[243452]: 2025-12-09 16:32:58.764 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:32:58 compute-0 nova_compute[243452]: 2025-12-09 16:32:58.764 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:32:59 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1718960985' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:33:00 compute-0 ceph-mon[75222]: pgmap v1022: 305 pgs: 305 active+clean; 13 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 39 op/s
Dec 09 16:33:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.9 KiB/s wr, 52 op/s
Dec 09 16:33:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:33:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Dec 09 16:33:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Dec 09 16:33:00 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Dec 09 16:33:01 compute-0 nova_compute[243452]: 2025-12-09 16:33:01.765 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:33:01 compute-0 nova_compute[243452]: 2025-12-09 16:33:01.765 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:33:01 compute-0 nova_compute[243452]: 2025-12-09 16:33:01.766 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:33:01 compute-0 nova_compute[243452]: 2025-12-09 16:33:01.811 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:33:01 compute-0 nova_compute[243452]: 2025-12-09 16:33:01.812 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:33:01 compute-0 ceph-mon[75222]: pgmap v1023: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.9 KiB/s wr, 52 op/s
Dec 09 16:33:01 compute-0 ceph-mon[75222]: osdmap e146: 3 total, 3 up, 3 in
Dec 09 16:33:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.5 KiB/s wr, 36 op/s
Dec 09 16:33:03 compute-0 nova_compute[243452]: 2025-12-09 16:33:03.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:33:03 compute-0 ceph-mon[75222]: pgmap v1025: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.5 KiB/s wr, 36 op/s
Dec 09 16:33:04 compute-0 nova_compute[243452]: 2025-12-09 16:33:04.047 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:33:04 compute-0 nova_compute[243452]: 2025-12-09 16:33:04.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:33:04 compute-0 nova_compute[243452]: 2025-12-09 16:33:04.054 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:33:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:33:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4779 writes, 21K keys, 4779 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4779 writes, 4779 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1378 writes, 6201 keys, 1378 commit groups, 1.0 writes per commit group, ingest: 8.92 MB, 0.01 MB/s
                                           Interval WAL: 1378 writes, 1378 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    123.5      0.20              0.08        12    0.017       0      0       0.0       0.0
                                             L6      1/0    7.31 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    160.3    131.8      0.61              0.25        11    0.055     48K   5771       0.0       0.0
                                            Sum      1/0    7.31 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    120.7    129.8      0.81              0.33        23    0.035     48K   5771       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.5    127.6    127.3      0.36              0.14        10    0.036     23K   2583       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    160.3    131.8      0.61              0.25        11    0.055     48K   5771       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    125.7      0.20              0.08        11    0.018       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.10 GB read, 0.05 MB/s read, 0.8 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ad05ef58d0#2 capacity: 304.00 MB usage: 9.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000144 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(566,8.90 MB,2.92828%) FilterBlock(24,142.42 KB,0.0457513%) IndexBlock(24,269.64 KB,0.0866187%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 09 16:33:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 KiB/s wr, 25 op/s
Dec 09 16:33:05 compute-0 nova_compute[243452]: 2025-12-09 16:33:05.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:33:05 compute-0 nova_compute[243452]: 2025-12-09 16:33:05.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:33:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:33:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Dec 09 16:33:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Dec 09 16:33:05 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Dec 09 16:33:05 compute-0 ceph-mon[75222]: pgmap v1026: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 KiB/s wr, 25 op/s
Dec 09 16:33:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 767 B/s wr, 23 op/s
Dec 09 16:33:06 compute-0 ceph-mon[75222]: osdmap e147: 3 total, 3 up, 3 in
Dec 09 16:33:07 compute-0 ceph-mon[75222]: pgmap v1028: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 767 B/s wr, 23 op/s
Dec 09 16:33:08 compute-0 nova_compute[243452]: 2025-12-09 16:33:08.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:33:08 compute-0 podman[252777]: 2025-12-09 16:33:08.607992681 +0000 UTC m=+0.052692618 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:33:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:08 compute-0 podman[252776]: 2025-12-09 16:33:08.695028906 +0000 UTC m=+0.142891360 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:33:10 compute-0 ceph-mon[75222]: pgmap v1029: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:33:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2692537284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:33:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:33:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2692537284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:33:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Dec 09 16:33:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:33:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/2692537284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:33:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/2692537284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:33:12 compute-0 ceph-mon[75222]: pgmap v1030: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Dec 09 16:33:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:33:14 compute-0 ceph-mon[75222]: pgmap v1031: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:33:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:33:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:33:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Dec 09 16:33:16 compute-0 ceph-mon[75222]: pgmap v1032: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:33:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Dec 09 16:33:16 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Dec 09 16:33:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:33:17 compute-0 ceph-mon[75222]: osdmap e148: 3 total, 3 up, 3 in
Dec 09 16:33:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:33:17.853 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:33:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:33:17.854 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:33:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:33:17.854 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:33:18 compute-0 ceph-mon[75222]: pgmap v1034: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:33:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:33:20 compute-0 ceph-mon[75222]: pgmap v1035: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 09 16:33:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:33:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:33:22 compute-0 ceph-mon[75222]: pgmap v1036: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:33:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:33:24 compute-0 ceph-mon[75222]: pgmap v1037: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:33:24 compute-0 podman[252820]: 2025-12-09 16:33:24.609551252 +0000 UTC m=+0.056052281 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:33:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:33:25 compute-0 sshd-session[252840]: Invalid user test from 146.190.31.45 port 60468
Dec 09 16:33:25 compute-0 sshd-session[252840]: Connection closed by invalid user test 146.190.31.45 port 60468 [preauth]
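
Note: unrelated to the storage traffic, the two sshd lines above record a probe
for the nonexistent user "test" from 146.190.31.45, dropped before
authentication. A small parser for pulling such attempts out of sshd messages,
e.g. to feed a blocklist, assuming this exact message shape:

    import re

    line = "Invalid user test from 146.190.31.45 port 60468"
    m = re.search(r"Invalid user (\S+) from (\S+) port (\d+)", line)
    if m:
        user, ip, port = m.group(1), m.group(2), int(m.group(3))
        print(f"failed probe for {user!r} from {ip}:{port}")
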
Dec 09 16:33:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:33:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Dec 09 16:33:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Dec 09 16:33:25 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Dec 09 16:33:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:33:25
Dec 09 16:33:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:33:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:33:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'images', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data', '.rgw.root']
Dec 09 16:33:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
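
Note: the balancer round above is a no-op: upmap mode with a 5% misplaced
ceiling walked all eleven pools and prepared 0 of its 10 allowed changes, which
is what you expect with every PG active+clean. A sketch for tracking that
counter across mgr log lines, assuming the message shape shown:

    import re

    def upmap_changes(line: str):
        """Return (made, budget) from a 'prepared X/Y upmap changes' line."""
        m = re.search(r"prepared (\d+)/(\d+) upmap changes", line)
        return (int(m.group(1)), int(m.group(2))) if m else None

    print(upmap_changes("[balancer INFO root] prepared 0/10 upmap changes"))
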
Dec 09 16:33:26 compute-0 ceph-mon[75222]: pgmap v1038: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:33:26 compute-0 ceph-mon[75222]: osdmap e149: 3 total, 3 up, 3 in
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:33:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:33:28 compute-0 ceph-mon[75222]: pgmap v1040: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:33:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:33:30 compute-0 ceph-mon[75222]: pgmap v1041: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 09 16:33:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:33:32 compute-0 ceph-mon[75222]: pgmap v1042: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:34 compute-0 ceph-mon[75222]: pgmap v1043: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:35 compute-0 ceph-mon[75222]: pgmap v1044: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.427166042984902e-07 of space, bias 1.0, pg target 0.00025281498128954704 quantized to 32 (current 32)
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7749663169956723e-06 of space, bias 4.0, pg target 0.0021299595803948067 quantized to 16 (current 16)
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:33:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
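
Note: the pg_autoscaler arithmetic above is reproducible: each "pg target"
equals the pool's share of raw space times its bias times roughly 300, e.g.
7.185749983720779e-06 x 1.0 x 300 = 0.0021557... for '.mgr' and
1.7749663169956723e-06 x 4.0 x 300 = 0.0021300 for 'cephfs.cephfs.meta'. The
factor of 300 is consistent with a default of about 100 target PGs per OSD
across the 3 OSDs (an inference from these numbers, not something the log
states). The target is then quantized to a power of two, and pools keep their
current pg_num when the change would be too small to act on:

    # Reproducing the autoscaler's per-pool target from the lines above.
    # Assumed from the numbers: 3 OSDs x ~100 target PGs per OSD.
    TARGET_PGS = 3 * 100

    def pg_target(used_ratio: float, bias: float) -> float:
        return used_ratio * bias * TARGET_PGS

    print(pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557 ('.mgr')
    print(pg_target(1.7749663169956723e-06, 4.0))  # ~0.0021300 (cephfs meta)
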
Dec 09 16:33:37 compute-0 ceph-mon[75222]: pgmap v1045: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:39 compute-0 podman[252843]: 2025-12-09 16:33:39.606802775 +0000 UTC m=+0.053943042 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 09 16:33:39 compute-0 podman[252842]: 2025-12-09 16:33:39.632652179 +0000 UTC m=+0.081308068 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:33:39 compute-0 ceph-mon[75222]: pgmap v1046: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:33:41 compute-0 ceph-mon[75222]: pgmap v1047: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:44 compute-0 ceph-mon[75222]: pgmap v1048: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:33:46 compute-0 ceph-mon[75222]: pgmap v1049: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:48 compute-0 ceph-mon[75222]: pgmap v1050: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:50 compute-0 ceph-mon[75222]: pgmap v1051: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:33:52 compute-0 ceph-mon[75222]: pgmap v1052: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:54 compute-0 ceph-mon[75222]: pgmap v1053: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:54 compute-0 sudo[252884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:33:54 compute-0 sudo[252884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:33:54 compute-0 sudo[252884]: pam_unix(sudo:session): session closed for user root
Dec 09 16:33:54 compute-0 sudo[252915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:33:54 compute-0 sudo[252915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:33:54 compute-0 podman[252908]: 2025-12-09 16:33:54.87483165 +0000 UTC m=+0.067448545 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Dec 09 16:33:55 compute-0 sudo[252915]: pam_unix(sudo:session): session closed for user root
Dec 09 16:33:55 compute-0 sudo[252983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:33:55 compute-0 sudo[252983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:33:55 compute-0 sudo[252983]: pam_unix(sudo:session): session closed for user root
Dec 09 16:33:55 compute-0 sudo[253008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Dec 09 16:33:55 compute-0 sudo[253008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:33:55 compute-0 sudo[253008]: pam_unix(sudo:session): session closed for user root
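
Note: the sudo pairs above (and again below) are the cephadm orchestrator's
remote-execution idiom: as ceph-admin, first locate a python3 interpreter with
"which", then run the content-addressed cephadm binary under it with a timeout,
here for gather-facts and list-networks. A local stand-in for the same two
steps (the real mgr module drives this over SSH; paths copied from the log):

    import subprocess

    CEPHADM = ("/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/"
               "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    python3 = subprocess.check_output(
        ["sudo", "which", "python3"], text=True).strip()
    subprocess.check_call(
        ["sudo", python3, CEPHADM, "--timeout", "895", "gather-facts"])
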
Dec 09 16:33:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:33:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:33:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:33:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:33:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:33:55 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:33:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:33:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:33:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:33:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:33:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:33:55 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:33:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:33:55 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:33:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:33:55 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:33:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:33:55 compute-0 sudo[253051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:33:55 compute-0 sudo[253051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:33:55 compute-0 sudo[253051]: pam_unix(sudo:session): session closed for user root
Dec 09 16:33:56 compute-0 sudo[253076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:33:56 compute-0 sudo[253076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:33:56 compute-0 ceph-mon[75222]: pgmap v1054: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:56 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:33:56 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:33:56 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:33:56 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:33:56 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:33:56 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:33:56 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:33:56 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:33:56 compute-0 podman[253113]: 2025-12-09 16:33:56.350941952 +0000 UTC m=+0.062856528 container create 2736e082f0f26819db5ea56129400570e86b88eb2b5c5262cac9e2ccfd8298ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_tharp, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:33:56 compute-0 systemd[1]: Started libpod-conmon-2736e082f0f26819db5ea56129400570e86b88eb2b5c5262cac9e2ccfd8298ae.scope.
Dec 09 16:33:56 compute-0 podman[253113]: 2025-12-09 16:33:56.322019093 +0000 UTC m=+0.033933689 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:33:56 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:33:56 compute-0 podman[253113]: 2025-12-09 16:33:56.442742019 +0000 UTC m=+0.154656595 container init 2736e082f0f26819db5ea56129400570e86b88eb2b5c5262cac9e2ccfd8298ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_tharp, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:33:56 compute-0 podman[253113]: 2025-12-09 16:33:56.449607689 +0000 UTC m=+0.161522275 container start 2736e082f0f26819db5ea56129400570e86b88eb2b5c5262cac9e2ccfd8298ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:33:56 compute-0 podman[253113]: 2025-12-09 16:33:56.453617429 +0000 UTC m=+0.165532055 container attach 2736e082f0f26819db5ea56129400570e86b88eb2b5c5262cac9e2ccfd8298ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:33:56 compute-0 adoring_tharp[253129]: 167 167
Dec 09 16:33:56 compute-0 systemd[1]: libpod-2736e082f0f26819db5ea56129400570e86b88eb2b5c5262cac9e2ccfd8298ae.scope: Deactivated successfully.
Dec 09 16:33:56 compute-0 podman[253113]: 2025-12-09 16:33:56.454519684 +0000 UTC m=+0.166434250 container died 2736e082f0f26819db5ea56129400570e86b88eb2b5c5262cac9e2ccfd8298ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_tharp, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:33:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe7c637adb1aa4cf3cdc10f927e14d9a84c1abfd9fb25cb702c2260eca547f79-merged.mount: Deactivated successfully.
Dec 09 16:33:56 compute-0 podman[253113]: 2025-12-09 16:33:56.501081721 +0000 UTC m=+0.212996277 container remove 2736e082f0f26819db5ea56129400570e86b88eb2b5c5262cac9e2ccfd8298ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_tharp, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:33:56 compute-0 systemd[1]: libpod-conmon-2736e082f0f26819db5ea56129400570e86b88eb2b5c5262cac9e2ccfd8298ae.scope: Deactivated successfully.
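
Note: the create/init/start/attach/died/remove burst above is a throwaway
cephadm helper container: it lives for roughly 100 ms, prints "167 167" on the
attached stream (consistent with the ceph user's uid and gid in these images),
and is removed. The same one-shot pattern by hand, with the image digest copied
from the log; what the real helper executed inside the container is not
recorded here, so the command below is illustrative:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # Run, capture stdout, auto-remove.
    out = subprocess.check_output(
        ["podman", "run", "--rm", IMAGE,
         "sh", "-c", "id -u ceph; id -g ceph"],
        text=True)
    print(out.split())  # expected ['167', '167']
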
Dec 09 16:33:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:33:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:33:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:33:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:33:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:33:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:33:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:56 compute-0 podman[253154]: 2025-12-09 16:33:56.653121213 +0000 UTC m=+0.048831761 container create 911035aeef5c3ee4f8113d078a9d1856f4e20614ed47cf39db594d3509e61729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:33:56 compute-0 systemd[1]: Started libpod-conmon-911035aeef5c3ee4f8113d078a9d1856f4e20614ed47cf39db594d3509e61729.scope.
Dec 09 16:33:56 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:33:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f81f2b40e2c6f413628178483189e15351bca1da7d224235600e0c02a728f83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:33:56 compute-0 podman[253154]: 2025-12-09 16:33:56.627595697 +0000 UTC m=+0.023306285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:33:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f81f2b40e2c6f413628178483189e15351bca1da7d224235600e0c02a728f83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:33:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f81f2b40e2c6f413628178483189e15351bca1da7d224235600e0c02a728f83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:33:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f81f2b40e2c6f413628178483189e15351bca1da7d224235600e0c02a728f83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:33:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f81f2b40e2c6f413628178483189e15351bca1da7d224235600e0c02a728f83/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
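
Note: the repeated xfs messages above are the kernel flagging that these
overlay bind targets carry 32-bit timestamps capped at 0x7fffffff seconds, the
classic Y2038 limit; harmless today, but worth knowing what the cap decodes to:

    from datetime import datetime, timezone

    # The cap the kernel prints, decoded.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
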
Dec 09 16:33:56 compute-0 podman[253154]: 2025-12-09 16:33:56.73842907 +0000 UTC m=+0.134139608 container init 911035aeef5c3ee4f8113d078a9d1856f4e20614ed47cf39db594d3509e61729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shaw, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:33:56 compute-0 podman[253154]: 2025-12-09 16:33:56.753428695 +0000 UTC m=+0.149139243 container start 911035aeef5c3ee4f8113d078a9d1856f4e20614ed47cf39db594d3509e61729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:33:56 compute-0 podman[253154]: 2025-12-09 16:33:56.760628184 +0000 UTC m=+0.156338792 container attach 911035aeef5c3ee4f8113d078a9d1856f4e20614ed47cf39db594d3509e61729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.075 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.076 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.076 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.076 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.076 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
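
Note: here nova's resource tracker sizes its RBD-backed storage by shelling out
to "ceph df"; the probe returns in 0.566 s a few lines below and feeds the
roughly 60 GB free_disk in the resource view. A sketch of the same probe and
the field the free-space figure tracks (the field name is what
"ceph df --format=json" emits; nova's exact mapping is summarized, not quoted
from its source):

    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(raw)["stats"]
    print(f"avail: {stats['total_avail_bytes'] / 1024**3:.2f} GiB")
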
Dec 09 16:33:57 compute-0 unruffled_shaw[253171]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:33:57 compute-0 unruffled_shaw[253171]: --> All data devices are unavailable
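
Note: the two ceph-volume lines above are the tail of the "lvm batch" run
started at 16:33:56: all three passed LVs were rejected as unavailable, which
is the expected outcome when they already carry the cluster's three up/in OSDs,
so the batch is a no-op reconciliation rather than a failure. cephadm's next
step (the "lvm list" call just below) inventories those existing OSDs; parsing
its JSON shows the LV-to-OSD mapping. A sketch, assuming root and a cephadm
binary on PATH (the host in this log uses a content-addressed copy under
/var/lib/ceph instead):

    import json
    import subprocess

    out = json.loads(subprocess.check_output(
        ["cephadm", "ceph-volume",
         "--fsid", "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
         "--", "lvm", "list", "--format", "json"]))
    for osd_id, devices in out.items():
        print(osd_id, [d.get("lv_path") for d in devices])
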
Dec 09 16:33:57 compute-0 systemd[1]: libpod-911035aeef5c3ee4f8113d078a9d1856f4e20614ed47cf39db594d3509e61729.scope: Deactivated successfully.
Dec 09 16:33:57 compute-0 podman[253211]: 2025-12-09 16:33:57.270521235 +0000 UTC m=+0.024615531 container died 911035aeef5c3ee4f8113d078a9d1856f4e20614ed47cf39db594d3509e61729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shaw, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 09 16:33:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f81f2b40e2c6f413628178483189e15351bca1da7d224235600e0c02a728f83-merged.mount: Deactivated successfully.
Dec 09 16:33:57 compute-0 podman[253211]: 2025-12-09 16:33:57.315027795 +0000 UTC m=+0.069122091 container remove 911035aeef5c3ee4f8113d078a9d1856f4e20614ed47cf39db594d3509e61729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:33:57 compute-0 systemd[1]: libpod-conmon-911035aeef5c3ee4f8113d078a9d1856f4e20614ed47cf39db594d3509e61729.scope: Deactivated successfully.
Dec 09 16:33:57 compute-0 sudo[253076]: pam_unix(sudo:session): session closed for user root
Dec 09 16:33:57 compute-0 sudo[253224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:33:57 compute-0 sudo[253224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:33:57 compute-0 sudo[253224]: pam_unix(sudo:session): session closed for user root
Dec 09 16:33:57 compute-0 sudo[253249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:33:57 compute-0 sudo[253249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:33:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:33:57 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2675772403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.642 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.791 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.793 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5088MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.793 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.793 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:33:57 compute-0 podman[253287]: 2025-12-09 16:33:57.80735759 +0000 UTC m=+0.039652726 container create 8437c81e540951bc74076f3c835d24ecb1ef7c804f41d7fe894d1ec2313966cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jang, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:33:57 compute-0 systemd[1]: Started libpod-conmon-8437c81e540951bc74076f3c835d24ecb1ef7c804f41d7fe894d1ec2313966cf.scope.
Dec 09 16:33:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.860 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.861 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:33:57 compute-0 podman[253287]: 2025-12-09 16:33:57.869637131 +0000 UTC m=+0.101932287 container init 8437c81e540951bc74076f3c835d24ecb1ef7c804f41d7fe894d1ec2313966cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jang, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:33:57 compute-0 podman[253287]: 2025-12-09 16:33:57.8757365 +0000 UTC m=+0.108031636 container start 8437c81e540951bc74076f3c835d24ecb1ef7c804f41d7fe894d1ec2313966cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 09 16:33:57 compute-0 podman[253287]: 2025-12-09 16:33:57.878832055 +0000 UTC m=+0.111127211 container attach 8437c81e540951bc74076f3c835d24ecb1ef7c804f41d7fe894d1ec2313966cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:33:57 compute-0 recursing_jang[253303]: 167 167
Dec 09 16:33:57 compute-0 systemd[1]: libpod-8437c81e540951bc74076f3c835d24ecb1ef7c804f41d7fe894d1ec2313966cf.scope: Deactivated successfully.
Dec 09 16:33:57 compute-0 conmon[253303]: conmon 8437c81e540951bc7407 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8437c81e540951bc74076f3c835d24ecb1ef7c804f41d7fe894d1ec2313966cf.scope/container/memory.events
Dec 09 16:33:57 compute-0 podman[253287]: 2025-12-09 16:33:57.882406764 +0000 UTC m=+0.114701900 container died 8437c81e540951bc74076f3c835d24ecb1ef7c804f41d7fe894d1ec2313966cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jang, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 09 16:33:57 compute-0 nova_compute[243452]: 2025-12-09 16:33:57.882 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:33:57 compute-0 podman[253287]: 2025-12-09 16:33:57.791352088 +0000 UTC m=+0.023647234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:33:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4979ab93d0ea969adfa36be13b200e75bdcaedcb43d63f5ee637b8a0a90431a2-merged.mount: Deactivated successfully.
Dec 09 16:33:57 compute-0 podman[253287]: 2025-12-09 16:33:57.916484636 +0000 UTC m=+0.148779772 container remove 8437c81e540951bc74076f3c835d24ecb1ef7c804f41d7fe894d1ec2313966cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jang, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:33:57 compute-0 systemd[1]: libpod-conmon-8437c81e540951bc74076f3c835d24ecb1ef7c804f41d7fe894d1ec2313966cf.scope: Deactivated successfully.
Dec 09 16:33:58 compute-0 ceph-mon[75222]: pgmap v1055: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:58 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2675772403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:33:58 compute-0 podman[253348]: 2025-12-09 16:33:58.104817871 +0000 UTC m=+0.040774668 container create 0af65d78b5ebd0d7dda12405cf68e0439b6a9c882b93295c7d4e98e42eb44831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_satoshi, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:33:58 compute-0 systemd[1]: Started libpod-conmon-0af65d78b5ebd0d7dda12405cf68e0439b6a9c882b93295c7d4e98e42eb44831.scope.
Dec 09 16:33:58 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4dbd0202f9f3b2823a03d55856e59ef22c6e92ccfa39a33550c29f69944c0b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4dbd0202f9f3b2823a03d55856e59ef22c6e92ccfa39a33550c29f69944c0b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4dbd0202f9f3b2823a03d55856e59ef22c6e92ccfa39a33550c29f69944c0b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4dbd0202f9f3b2823a03d55856e59ef22c6e92ccfa39a33550c29f69944c0b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:33:58 compute-0 podman[253348]: 2025-12-09 16:33:58.088248883 +0000 UTC m=+0.024205700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:33:58 compute-0 podman[253348]: 2025-12-09 16:33:58.197825801 +0000 UTC m=+0.133782628 container init 0af65d78b5ebd0d7dda12405cf68e0439b6a9c882b93295c7d4e98e42eb44831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_satoshi, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:33:58 compute-0 podman[253348]: 2025-12-09 16:33:58.205271227 +0000 UTC m=+0.141228024 container start 0af65d78b5ebd0d7dda12405cf68e0439b6a9c882b93295c7d4e98e42eb44831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:33:58 compute-0 podman[253348]: 2025-12-09 16:33:58.209303308 +0000 UTC m=+0.145260125 container attach 0af65d78b5ebd0d7dda12405cf68e0439b6a9c882b93295c7d4e98e42eb44831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 16:33:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:33:58 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/752129098' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:33:58 compute-0 silly_satoshi[253364]: {
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:     "0": [
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:         {
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "devices": [
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "/dev/loop3"
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             ],
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_name": "ceph_lv0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_size": "21470642176",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "name": "ceph_lv0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "tags": {
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.cluster_name": "ceph",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.crush_device_class": "",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.encrypted": "0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.objectstore": "bluestore",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.osd_id": "0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.type": "block",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.vdo": "0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.with_tpm": "0"
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             },
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "type": "block",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "vg_name": "ceph_vg0"
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:         }
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:     ],
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:     "1": [
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:         {
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "devices": [
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "/dev/loop4"
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             ],
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_name": "ceph_lv1",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_size": "21470642176",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "name": "ceph_lv1",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "tags": {
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.cluster_name": "ceph",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.crush_device_class": "",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.encrypted": "0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.objectstore": "bluestore",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.osd_id": "1",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.type": "block",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.vdo": "0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.with_tpm": "0"
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             },
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "type": "block",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "vg_name": "ceph_vg1"
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:         }
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:     ],
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:     "2": [
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:         {
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "devices": [
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "/dev/loop5"
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             ],
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_name": "ceph_lv2",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_size": "21470642176",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "name": "ceph_lv2",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "tags": {
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.cluster_name": "ceph",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.crush_device_class": "",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.encrypted": "0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.objectstore": "bluestore",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.osd_id": "2",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.type": "block",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.vdo": "0",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:                 "ceph.with_tpm": "0"
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             },
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "type": "block",
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:             "vg_name": "ceph_vg2"
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:         }
Dec 09 16:33:58 compute-0 silly_satoshi[253364]:     ]
Dec 09 16:33:58 compute-0 silly_satoshi[253364]: }
Dec 09 16:33:58 compute-0 nova_compute[243452]: 2025-12-09 16:33:58.504 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:33:58 compute-0 nova_compute[243452]: 2025-12-09 16:33:58.512 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:33:58 compute-0 systemd[1]: libpod-0af65d78b5ebd0d7dda12405cf68e0439b6a9c882b93295c7d4e98e42eb44831.scope: Deactivated successfully.
Dec 09 16:33:58 compute-0 podman[253348]: 2025-12-09 16:33:58.529410464 +0000 UTC m=+0.465367262 container died 0af65d78b5ebd0d7dda12405cf68e0439b6a9c882b93295c7d4e98e42eb44831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_satoshi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:33:58 compute-0 nova_compute[243452]: 2025-12-09 16:33:58.533 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:33:58 compute-0 nova_compute[243452]: 2025-12-09 16:33:58.535 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:33:58 compute-0 nova_compute[243452]: 2025-12-09 16:33:58.535 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4dbd0202f9f3b2823a03d55856e59ef22c6e92ccfa39a33550c29f69944c0b3-merged.mount: Deactivated successfully.
Dec 09 16:33:58 compute-0 podman[253348]: 2025-12-09 16:33:58.56073559 +0000 UTC m=+0.496692387 container remove 0af65d78b5ebd0d7dda12405cf68e0439b6a9c882b93295c7d4e98e42eb44831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_satoshi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 09 16:33:58 compute-0 systemd[1]: libpod-conmon-0af65d78b5ebd0d7dda12405cf68e0439b6a9c882b93295c7d4e98e42eb44831.scope: Deactivated successfully.
Dec 09 16:33:58 compute-0 sudo[253249]: pam_unix(sudo:session): session closed for user root
Dec 09 16:33:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:33:58 compute-0 sudo[253386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:33:58 compute-0 sudo[253386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:33:58 compute-0 sudo[253386]: pam_unix(sudo:session): session closed for user root
Dec 09 16:33:58 compute-0 sudo[253411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:33:58 compute-0 sudo[253411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:33:59 compute-0 podman[253449]: 2025-12-09 16:33:59.036848318 +0000 UTC m=+0.045715405 container create edf9dee3987f90950e1a61ac5022da1c2be5b9a4a8f8380f899e29fea26ab19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:33:59 compute-0 systemd[1]: Started libpod-conmon-edf9dee3987f90950e1a61ac5022da1c2be5b9a4a8f8380f899e29fea26ab19e.scope.
Dec 09 16:33:59 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/752129098' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:33:59 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:33:59 compute-0 podman[253449]: 2025-12-09 16:33:59.016149736 +0000 UTC m=+0.025016813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:33:59 compute-0 podman[253449]: 2025-12-09 16:33:59.11690782 +0000 UTC m=+0.125774907 container init edf9dee3987f90950e1a61ac5022da1c2be5b9a4a8f8380f899e29fea26ab19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ride, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:33:59 compute-0 podman[253449]: 2025-12-09 16:33:59.122364851 +0000 UTC m=+0.131231918 container start edf9dee3987f90950e1a61ac5022da1c2be5b9a4a8f8380f899e29fea26ab19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:33:59 compute-0 podman[253449]: 2025-12-09 16:33:59.1259326 +0000 UTC m=+0.134799687 container attach edf9dee3987f90950e1a61ac5022da1c2be5b9a4a8f8380f899e29fea26ab19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 09 16:33:59 compute-0 adoring_ride[253465]: 167 167
Dec 09 16:33:59 compute-0 systemd[1]: libpod-edf9dee3987f90950e1a61ac5022da1c2be5b9a4a8f8380f899e29fea26ab19e.scope: Deactivated successfully.
Dec 09 16:33:59 compute-0 podman[253449]: 2025-12-09 16:33:59.130167337 +0000 UTC m=+0.139034404 container died edf9dee3987f90950e1a61ac5022da1c2be5b9a4a8f8380f899e29fea26ab19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ride, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:33:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d42de74c3d6f4ac894f4f7e2318c48a4fa770f9b3bdfc31f4328ca8314257899-merged.mount: Deactivated successfully.
Dec 09 16:33:59 compute-0 podman[253449]: 2025-12-09 16:33:59.163782745 +0000 UTC m=+0.172649822 container remove edf9dee3987f90950e1a61ac5022da1c2be5b9a4a8f8380f899e29fea26ab19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ride, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:33:59 compute-0 systemd[1]: libpod-conmon-edf9dee3987f90950e1a61ac5022da1c2be5b9a4a8f8380f899e29fea26ab19e.scope: Deactivated successfully.
Dec 09 16:33:59 compute-0 podman[253488]: 2025-12-09 16:33:59.339213644 +0000 UTC m=+0.047716860 container create 17bec0bf206bcdad2f777ff51c4fd10cdb311cd90f022b2a5441daf36d39ebb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:33:59 compute-0 systemd[1]: Started libpod-conmon-17bec0bf206bcdad2f777ff51c4fd10cdb311cd90f022b2a5441daf36d39ebb0.scope.
Dec 09 16:33:59 compute-0 podman[253488]: 2025-12-09 16:33:59.312964438 +0000 UTC m=+0.021467634 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:33:59 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:33:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebb3e44ead0597ad37672499da75eb11fd1bbc5163134a156c32e82280c229a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:33:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebb3e44ead0597ad37672499da75eb11fd1bbc5163134a156c32e82280c229a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:33:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebb3e44ead0597ad37672499da75eb11fd1bbc5163134a156c32e82280c229a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:33:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebb3e44ead0597ad37672499da75eb11fd1bbc5163134a156c32e82280c229a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:33:59 compute-0 podman[253488]: 2025-12-09 16:33:59.432126241 +0000 UTC m=+0.140629497 container init 17bec0bf206bcdad2f777ff51c4fd10cdb311cd90f022b2a5441daf36d39ebb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:33:59 compute-0 podman[253488]: 2025-12-09 16:33:59.439135835 +0000 UTC m=+0.147639041 container start 17bec0bf206bcdad2f777ff51c4fd10cdb311cd90f022b2a5441daf36d39ebb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:33:59 compute-0 podman[253488]: 2025-12-09 16:33:59.443416183 +0000 UTC m=+0.151919399 container attach 17bec0bf206bcdad2f777ff51c4fd10cdb311cd90f022b2a5441daf36d39ebb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:34:00 compute-0 ceph-mon[75222]: pgmap v1056: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:00 compute-0 lvm[253585]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:34:00 compute-0 lvm[253585]: VG ceph_vg2 finished
Dec 09 16:34:00 compute-0 lvm[253583]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:34:00 compute-0 lvm[253583]: VG ceph_vg0 finished
Dec 09 16:34:00 compute-0 lvm[253584]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:34:00 compute-0 lvm[253584]: VG ceph_vg1 finished
Dec 09 16:34:00 compute-0 distracted_hoover[253504]: {}
Dec 09 16:34:00 compute-0 systemd[1]: libpod-17bec0bf206bcdad2f777ff51c4fd10cdb311cd90f022b2a5441daf36d39ebb0.scope: Deactivated successfully.
Dec 09 16:34:00 compute-0 podman[253488]: 2025-12-09 16:34:00.198546902 +0000 UTC m=+0.907050118 container died 17bec0bf206bcdad2f777ff51c4fd10cdb311cd90f022b2a5441daf36d39ebb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:34:00 compute-0 systemd[1]: libpod-17bec0bf206bcdad2f777ff51c4fd10cdb311cd90f022b2a5441daf36d39ebb0.scope: Consumed 1.248s CPU time.
Dec 09 16:34:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-eebb3e44ead0597ad37672499da75eb11fd1bbc5163134a156c32e82280c229a-merged.mount: Deactivated successfully.
Dec 09 16:34:00 compute-0 podman[253488]: 2025-12-09 16:34:00.419687743 +0000 UTC m=+1.128190929 container remove 17bec0bf206bcdad2f777ff51c4fd10cdb311cd90f022b2a5441daf36d39ebb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:34:00 compute-0 systemd[1]: libpod-conmon-17bec0bf206bcdad2f777ff51c4fd10cdb311cd90f022b2a5441daf36d39ebb0.scope: Deactivated successfully.
Dec 09 16:34:00 compute-0 sudo[253411]: pam_unix(sudo:session): session closed for user root
Dec 09 16:34:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:34:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:34:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:34:00 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:34:00 compute-0 sudo[253600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:34:00 compute-0 sudo[253600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:34:00 compute-0 sudo[253600]: pam_unix(sudo:session): session closed for user root
Dec 09 16:34:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:34:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:34:01 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:34:01 compute-0 ceph-mon[75222]: pgmap v1057: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:01 compute-0 nova_compute[243452]: 2025-12-09 16:34:01.536 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:34:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:03 compute-0 nova_compute[243452]: 2025-12-09 16:34:03.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:34:03 compute-0 nova_compute[243452]: 2025-12-09 16:34:03.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:34:03 compute-0 nova_compute[243452]: 2025-12-09 16:34:03.056 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:34:03 compute-0 nova_compute[243452]: 2025-12-09 16:34:03.219 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:34:03 compute-0 nova_compute[243452]: 2025-12-09 16:34:03.220 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:34:03 compute-0 ceph-mon[75222]: pgmap v1058: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:04 compute-0 nova_compute[243452]: 2025-12-09 16:34:04.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:34:04 compute-0 nova_compute[243452]: 2025-12-09 16:34:04.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:34:04 compute-0 nova_compute[243452]: 2025-12-09 16:34:04.054 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:34:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:05 compute-0 ceph-mon[75222]: pgmap v1059: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:34:06 compute-0 nova_compute[243452]: 2025-12-09 16:34:06.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:34:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:07 compute-0 nova_compute[243452]: 2025-12-09 16:34:07.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:34:07 compute-0 ceph-mon[75222]: pgmap v1060: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:09 compute-0 nova_compute[243452]: 2025-12-09 16:34:09.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:34:09 compute-0 ceph-mon[75222]: pgmap v1061: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:10 compute-0 nova_compute[243452]: 2025-12-09 16:34:10.046 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:34:10 compute-0 sshd-session[253625]: Invalid user test from 146.190.31.45 port 33314
Dec 09 16:34:10 compute-0 sshd-session[253625]: Connection closed by invalid user test 146.190.31.45 port 33314 [preauth]
Dec 09 16:34:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:34:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2877099984' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:34:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:34:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2877099984' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:34:10 compute-0 podman[253628]: 2025-12-09 16:34:10.143673027 +0000 UTC m=+0.078419248 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 16:34:10 compute-0 podman[253627]: 2025-12-09 16:34:10.178823948 +0000 UTC m=+0.122259979 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 09 16:34:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/2877099984' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:34:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/2877099984' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:34:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:34:11 compute-0 ceph-mon[75222]: pgmap v1062: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:34:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6453 writes, 26K keys, 6453 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6453 writes, 1287 syncs, 5.01 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 567 writes, 1336 keys, 567 commit groups, 1.0 writes per commit group, ingest: 0.80 MB, 0.00 MB/s
                                           Interval WAL: 567 writes, 257 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 09 16:34:13 compute-0 ceph-mon[75222]: pgmap v1063: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:15 compute-0 ceph-mon[75222]: pgmap v1064: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:34:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:34:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 8023 writes, 31K keys, 8023 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8023 writes, 1829 syncs, 4.39 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 892 writes, 2343 keys, 892 commit groups, 1.0 writes per commit group, ingest: 1.12 MB, 0.00 MB/s
                                           Interval WAL: 892 writes, 396 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 09 16:34:17 compute-0 ceph-mon[75222]: pgmap v1065: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:34:17.855 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:34:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:34:17.855 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:34:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:34:17.855 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:34:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:19 compute-0 ceph-mon[75222]: pgmap v1066: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:34:21 compute-0 ceph-mon[75222]: pgmap v1067: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:22 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:34:22 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 6567 writes, 26K keys, 6567 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6567 writes, 1311 syncs, 5.01 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 839 writes, 2350 keys, 839 commit groups, 1.0 writes per commit group, ingest: 1.16 MB, 0.00 MB/s
                                           Interval WAL: 839 writes, 377 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 09 16:34:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:23 compute-0 ceph-mon[75222]: pgmap v1068: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:24 compute-0 ceph-mgr[75515]: [devicehealth INFO root] Check health
Dec 09 16:34:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:25 compute-0 podman[253672]: 2025-12-09 16:34:25.646585103 +0000 UTC m=+0.088920992 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 09 16:34:25 compute-0 ceph-mon[75222]: pgmap v1069: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:34:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:34:25
Dec 09 16:34:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:34:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:34:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.control', '.rgw.root']
Dec 09 16:34:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:34:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:34:27 compute-0 ceph-mon[75222]: pgmap v1070: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:29 compute-0 ceph-mon[75222]: pgmap v1071: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:34:31 compute-0 ceph-mon[75222]: pgmap v1072: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:33 compute-0 ceph-mon[75222]: pgmap v1073: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:35 compute-0 ceph-mon[75222]: pgmap v1074: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.427166042984902e-07 of space, bias 1.0, pg target 0.00025281498128954704 quantized to 32 (current 32)
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7749663169956723e-06 of space, bias 4.0, pg target 0.0021299595803948067 quantized to 16 (current 16)
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:34:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:34:37 compute-0 ceph-mon[75222]: pgmap v1075: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:39 compute-0 ceph-mon[75222]: pgmap v1076: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:40 compute-0 podman[253693]: 2025-12-09 16:34:40.629653922 +0000 UTC m=+0.058813319 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 09 16:34:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:40 compute-0 podman[253692]: 2025-12-09 16:34:40.668009909 +0000 UTC m=+0.097255089 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 09 16:34:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:34:41 compute-0 ceph-mon[75222]: pgmap v1077: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:43 compute-0 ceph-mon[75222]: pgmap v1078: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:34:46 compute-0 ceph-mon[75222]: pgmap v1079: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:48 compute-0 ceph-mon[75222]: pgmap v1080: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:50 compute-0 ceph-mon[75222]: pgmap v1081: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:34:52 compute-0 ceph-mon[75222]: pgmap v1082: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:54 compute-0 ceph-mon[75222]: pgmap v1083: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:34:56 compute-0 ceph-mon[75222]: pgmap v1084: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:34:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:34:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:34:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:34:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:34:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:34:56 compute-0 sshd-session[253736]: Invalid user test from 146.190.31.45 port 55160
Dec 09 16:34:56 compute-0 podman[253738]: 2025-12-09 16:34:56.617633794 +0000 UTC m=+0.058675885 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 16:34:56 compute-0 sshd-session[253736]: Connection closed by invalid user test 146.190.31.45 port 55160 [preauth]
Dec 09 16:34:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.082 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.083 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.084 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.084 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.084 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:34:57 compute-0 ceph-mon[75222]: pgmap v1085: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:57 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:34:57 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656365633' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.643 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.814 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.815 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5127MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.816 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.816 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.880 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.881 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:34:57 compute-0 nova_compute[243452]: 2025-12-09 16:34:57.901 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:34:58 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:34:58 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596319672' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:34:58 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2656365633' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:34:58 compute-0 nova_compute[243452]: 2025-12-09 16:34:58.437 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:34:58 compute-0 nova_compute[243452]: 2025-12-09 16:34:58.443 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:34:58 compute-0 nova_compute[243452]: 2025-12-09 16:34:58.545 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:34:58 compute-0 nova_compute[243452]: 2025-12-09 16:34:58.548 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:34:58 compute-0 nova_compute[243452]: 2025-12-09 16:34:58.549 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:34:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:34:59 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2596319672' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:34:59 compute-0 ceph-mon[75222]: pgmap v1086: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:00 compute-0 sudo[253802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:35:00 compute-0 sudo[253802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:00 compute-0 sudo[253802]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:00 compute-0 sudo[253827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 09 16:35:00 compute-0 sudo[253827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:35:00 compute-0 sudo[253827]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:35:01 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:35:01 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:01 compute-0 sudo[253872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:35:01 compute-0 sudo[253872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:01 compute-0 sudo[253872]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:01 compute-0 sudo[253897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:35:01 compute-0 sudo[253897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:01 compute-0 nova_compute[243452]: 2025-12-09 16:35:01.549 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:35:02 compute-0 ceph-mon[75222]: pgmap v1087: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:02 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:02 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:02 compute-0 sudo[253897]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:02 compute-0 sudo[253953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:35:02 compute-0 sudo[253953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:02 compute-0 sudo[253953]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:02 compute-0 sudo[253978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- inventory --format=json-pretty --filter-for-batch
Dec 09 16:35:02 compute-0 sudo[253978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:02 compute-0 podman[254013]: 2025-12-09 16:35:02.621775233 +0000 UTC m=+0.053310453 container create 42c97240b4b0885ef10279843456d2aa8a57e570b45cd63890a5bbb1ad74c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:35:02 compute-0 systemd[1]: Started libpod-conmon-42c97240b4b0885ef10279843456d2aa8a57e570b45cd63890a5bbb1ad74c4b1.scope.
Dec 09 16:35:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:35:02 compute-0 podman[254013]: 2025-12-09 16:35:02.601162549 +0000 UTC m=+0.032697809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:35:02 compute-0 podman[254013]: 2025-12-09 16:35:02.708482022 +0000 UTC m=+0.140017302 container init 42c97240b4b0885ef10279843456d2aa8a57e570b45cd63890a5bbb1ad74c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:35:02 compute-0 podman[254013]: 2025-12-09 16:35:02.72109797 +0000 UTC m=+0.152633190 container start 42c97240b4b0885ef10279843456d2aa8a57e570b45cd63890a5bbb1ad74c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 09 16:35:02 compute-0 podman[254013]: 2025-12-09 16:35:02.725538326 +0000 UTC m=+0.157073636 container attach 42c97240b4b0885ef10279843456d2aa8a57e570b45cd63890a5bbb1ad74c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:35:02 compute-0 infallible_noyce[254029]: 167 167
Dec 09 16:35:02 compute-0 systemd[1]: libpod-42c97240b4b0885ef10279843456d2aa8a57e570b45cd63890a5bbb1ad74c4b1.scope: Deactivated successfully.
Dec 09 16:35:02 compute-0 conmon[254029]: conmon 42c97240b4b0885ef102 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42c97240b4b0885ef10279843456d2aa8a57e570b45cd63890a5bbb1ad74c4b1.scope/container/memory.events
Dec 09 16:35:02 compute-0 podman[254013]: 2025-12-09 16:35:02.729019674 +0000 UTC m=+0.160554914 container died 42c97240b4b0885ef10279843456d2aa8a57e570b45cd63890a5bbb1ad74c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_noyce, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:35:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a7ed6ae794ef7b0f340c6bf4e6c1d2492be2c31ddb72e4b0263671864ac2b89-merged.mount: Deactivated successfully.
Dec 09 16:35:02 compute-0 podman[254013]: 2025-12-09 16:35:02.771025216 +0000 UTC m=+0.202560426 container remove 42c97240b4b0885ef10279843456d2aa8a57e570b45cd63890a5bbb1ad74c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_noyce, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Dec 09 16:35:02 compute-0 systemd[1]: libpod-conmon-42c97240b4b0885ef10279843456d2aa8a57e570b45cd63890a5bbb1ad74c4b1.scope: Deactivated successfully.
Dec 09 16:35:02 compute-0 podman[254053]: 2025-12-09 16:35:02.951301468 +0000 UTC m=+0.058280264 container create b9a6d609b9c1e7a3c226aadfb2a34eefff9128f132d2c9538f7b46407a154f7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_moser, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:35:03 compute-0 systemd[1]: Started libpod-conmon-b9a6d609b9c1e7a3c226aadfb2a34eefff9128f132d2c9538f7b46407a154f7b.scope.
Dec 09 16:35:03 compute-0 podman[254053]: 2025-12-09 16:35:02.934492901 +0000 UTC m=+0.041471707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:35:03 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe145ddbf389e695f58dae7aff37d0f7626866f9a27c3cfffeba687ff6b767d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe145ddbf389e695f58dae7aff37d0f7626866f9a27c3cfffeba687ff6b767d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe145ddbf389e695f58dae7aff37d0f7626866f9a27c3cfffeba687ff6b767d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe145ddbf389e695f58dae7aff37d0f7626866f9a27c3cfffeba687ff6b767d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:03 compute-0 nova_compute[243452]: 2025-12-09 16:35:03.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:35:03 compute-0 nova_compute[243452]: 2025-12-09 16:35:03.056 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:35:03 compute-0 nova_compute[243452]: 2025-12-09 16:35:03.056 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:35:03 compute-0 podman[254053]: 2025-12-09 16:35:03.064256131 +0000 UTC m=+0.171234937 container init b9a6d609b9c1e7a3c226aadfb2a34eefff9128f132d2c9538f7b46407a154f7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_moser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:35:03 compute-0 podman[254053]: 2025-12-09 16:35:03.079176144 +0000 UTC m=+0.186154940 container start b9a6d609b9c1e7a3c226aadfb2a34eefff9128f132d2c9538f7b46407a154f7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:35:03 compute-0 nova_compute[243452]: 2025-12-09 16:35:03.081 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:35:03 compute-0 nova_compute[243452]: 2025-12-09 16:35:03.082 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:35:03 compute-0 podman[254053]: 2025-12-09 16:35:03.083675051 +0000 UTC m=+0.190653887 container attach b9a6d609b9c1e7a3c226aadfb2a34eefff9128f132d2c9538f7b46407a154f7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:35:03 compute-0 youthful_moser[254069]: [
Dec 09 16:35:03 compute-0 youthful_moser[254069]:     {
Dec 09 16:35:03 compute-0 youthful_moser[254069]:         "available": false,
Dec 09 16:35:03 compute-0 youthful_moser[254069]:         "being_replaced": false,
Dec 09 16:35:03 compute-0 youthful_moser[254069]:         "ceph_device_lvm": false,
Dec 09 16:35:03 compute-0 youthful_moser[254069]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:         "lsm_data": {},
Dec 09 16:35:03 compute-0 youthful_moser[254069]:         "lvs": [],
Dec 09 16:35:03 compute-0 youthful_moser[254069]:         "path": "/dev/sr0",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:         "rejected_reasons": [
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "Has a FileSystem",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "Insufficient space (<5GB)"
Dec 09 16:35:03 compute-0 youthful_moser[254069]:         ],
Dec 09 16:35:03 compute-0 youthful_moser[254069]:         "sys_api": {
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "actuators": null,
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "device_nodes": [
Dec 09 16:35:03 compute-0 youthful_moser[254069]:                 "sr0"
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             ],
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "devname": "sr0",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "human_readable_size": "482.00 KB",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "id_bus": "ata",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "model": "QEMU DVD-ROM",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "nr_requests": "2",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "parent": "/dev/sr0",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "partitions": {},
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "path": "/dev/sr0",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "removable": "1",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "rev": "2.5+",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "ro": "0",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "rotational": "1",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "sas_address": "",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "sas_device_handle": "",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "scheduler_mode": "mq-deadline",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "sectors": 0,
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "sectorsize": "2048",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "size": 493568.0,
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "support_discard": "2048",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "type": "disk",
Dec 09 16:35:03 compute-0 youthful_moser[254069]:             "vendor": "QEMU"
Dec 09 16:35:03 compute-0 youthful_moser[254069]:         }
Dec 09 16:35:03 compute-0 youthful_moser[254069]:     }
Dec 09 16:35:03 compute-0 youthful_moser[254069]: ]
Dec 09 16:35:03 compute-0 systemd[1]: libpod-b9a6d609b9c1e7a3c226aadfb2a34eefff9128f132d2c9538f7b46407a154f7b.scope: Deactivated successfully.
Dec 09 16:35:03 compute-0 podman[254053]: 2025-12-09 16:35:03.577693591 +0000 UTC m=+0.684672387 container died b9a6d609b9c1e7a3c226aadfb2a34eefff9128f132d2c9538f7b46407a154f7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:35:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fe145ddbf389e695f58dae7aff37d0f7626866f9a27c3cfffeba687ff6b767d-merged.mount: Deactivated successfully.
Dec 09 16:35:03 compute-0 podman[254053]: 2025-12-09 16:35:03.619019303 +0000 UTC m=+0.725998109 container remove b9a6d609b9c1e7a3c226aadfb2a34eefff9128f132d2c9538f7b46407a154f7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_moser, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 16:35:03 compute-0 systemd[1]: libpod-conmon-b9a6d609b9c1e7a3c226aadfb2a34eefff9128f132d2c9538f7b46407a154f7b.scope: Deactivated successfully.
Dec 09 16:35:03 compute-0 sudo[253978]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:35:03 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:35:03 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:35:03 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:35:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:35:03 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:35:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:35:03 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:35:03 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:35:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:35:03 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:35:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:35:03 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:35:03 compute-0 sudo[254894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:35:03 compute-0 sudo[254894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:03 compute-0 sudo[254894]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:03 compute-0 sudo[254919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:35:03 compute-0 sudo[254919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:04 compute-0 ceph-mon[75222]: pgmap v1088: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:35:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:35:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:35:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:35:04 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:35:04 compute-0 podman[254956]: 2025-12-09 16:35:04.052798543 +0000 UTC m=+0.034543430 container create 97a50b4858a54eafba266a00668508d5f06c3c7583e6bc40cf6ae729cef3affc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_robinson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:35:04 compute-0 nova_compute[243452]: 2025-12-09 16:35:04.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:35:04 compute-0 nova_compute[243452]: 2025-12-09 16:35:04.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:35:04 compute-0 systemd[1]: Started libpod-conmon-97a50b4858a54eafba266a00668508d5f06c3c7583e6bc40cf6ae729cef3affc.scope.
Dec 09 16:35:04 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:35:04 compute-0 podman[254956]: 2025-12-09 16:35:04.122168901 +0000 UTC m=+0.103913868 container init 97a50b4858a54eafba266a00668508d5f06c3c7583e6bc40cf6ae729cef3affc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_robinson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:35:04 compute-0 podman[254956]: 2025-12-09 16:35:04.129137858 +0000 UTC m=+0.110882735 container start 97a50b4858a54eafba266a00668508d5f06c3c7583e6bc40cf6ae729cef3affc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_robinson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:35:04 compute-0 youthful_robinson[254972]: 167 167
Dec 09 16:35:04 compute-0 podman[254956]: 2025-12-09 16:35:04.132178164 +0000 UTC m=+0.113923081 container attach 97a50b4858a54eafba266a00668508d5f06c3c7583e6bc40cf6ae729cef3affc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_robinson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:35:04 compute-0 systemd[1]: libpod-97a50b4858a54eafba266a00668508d5f06c3c7583e6bc40cf6ae729cef3affc.scope: Deactivated successfully.
Dec 09 16:35:04 compute-0 podman[254956]: 2025-12-09 16:35:04.135345464 +0000 UTC m=+0.117090341 container died 97a50b4858a54eafba266a00668508d5f06c3c7583e6bc40cf6ae729cef3affc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_robinson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:35:04 compute-0 podman[254956]: 2025-12-09 16:35:04.037804478 +0000 UTC m=+0.019549375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e86c9d8481f353a3065163bf5988a24ba689054ed747e41adf4220a9e3339a77-merged.mount: Deactivated successfully.
Dec 09 16:35:04 compute-0 podman[254956]: 2025-12-09 16:35:04.166477437 +0000 UTC m=+0.148222314 container remove 97a50b4858a54eafba266a00668508d5f06c3c7583e6bc40cf6ae729cef3affc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:35:04 compute-0 systemd[1]: libpod-conmon-97a50b4858a54eafba266a00668508d5f06c3c7583e6bc40cf6ae729cef3affc.scope: Deactivated successfully.
Dec 09 16:35:04 compute-0 podman[254995]: 2025-12-09 16:35:04.322736578 +0000 UTC m=+0.039798169 container create 4ea784fb171c14e9ea2eea3fa8f120409f62dd61a32a25c1d7cf890f0fda3b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 09 16:35:04 compute-0 systemd[1]: Started libpod-conmon-4ea784fb171c14e9ea2eea3fa8f120409f62dd61a32a25c1d7cf890f0fda3b0f.scope.
Dec 09 16:35:04 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb2e78e6adf6146cb75686725621841fdc1920cf4eb390c044b611fb315e821/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb2e78e6adf6146cb75686725621841fdc1920cf4eb390c044b611fb315e821/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb2e78e6adf6146cb75686725621841fdc1920cf4eb390c044b611fb315e821/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb2e78e6adf6146cb75686725621841fdc1920cf4eb390c044b611fb315e821/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb2e78e6adf6146cb75686725621841fdc1920cf4eb390c044b611fb315e821/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:04 compute-0 podman[254995]: 2025-12-09 16:35:04.387815024 +0000 UTC m=+0.104876625 container init 4ea784fb171c14e9ea2eea3fa8f120409f62dd61a32a25c1d7cf890f0fda3b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_shamir, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Dec 09 16:35:04 compute-0 podman[254995]: 2025-12-09 16:35:04.398748354 +0000 UTC m=+0.115809935 container start 4ea784fb171c14e9ea2eea3fa8f120409f62dd61a32a25c1d7cf890f0fda3b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 09 16:35:04 compute-0 podman[254995]: 2025-12-09 16:35:04.30447658 +0000 UTC m=+0.021538191 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:35:04 compute-0 podman[254995]: 2025-12-09 16:35:04.40250191 +0000 UTC m=+0.119563511 container attach 4ea784fb171c14e9ea2eea3fa8f120409f62dd61a32a25c1d7cf890f0fda3b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:35:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:04 compute-0 beautiful_shamir[255011]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:35:04 compute-0 beautiful_shamir[255011]: --> All data devices are unavailable
Dec 09 16:35:04 compute-0 systemd[1]: libpod-4ea784fb171c14e9ea2eea3fa8f120409f62dd61a32a25c1d7cf890f0fda3b0f.scope: Deactivated successfully.
Dec 09 16:35:04 compute-0 podman[254995]: 2025-12-09 16:35:04.922161936 +0000 UTC m=+0.639223517 container died 4ea784fb171c14e9ea2eea3fa8f120409f62dd61a32a25c1d7cf890f0fda3b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_shamir, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfb2e78e6adf6146cb75686725621841fdc1920cf4eb390c044b611fb315e821-merged.mount: Deactivated successfully.
Dec 09 16:35:04 compute-0 podman[254995]: 2025-12-09 16:35:04.965490915 +0000 UTC m=+0.682552536 container remove 4ea784fb171c14e9ea2eea3fa8f120409f62dd61a32a25c1d7cf890f0fda3b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:35:04 compute-0 systemd[1]: libpod-conmon-4ea784fb171c14e9ea2eea3fa8f120409f62dd61a32a25c1d7cf890f0fda3b0f.scope: Deactivated successfully.
Dec 09 16:35:05 compute-0 sudo[254919]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:05 compute-0 nova_compute[243452]: 2025-12-09 16:35:05.048 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:35:05 compute-0 sudo[255043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:35:05 compute-0 sudo[255043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:05 compute-0 sudo[255043]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:05 compute-0 sudo[255068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:35:05 compute-0 sudo[255068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:05 compute-0 podman[255106]: 2025-12-09 16:35:05.413587982 +0000 UTC m=+0.036702992 container create 5224457076c8f1180d4877b5f4545cdd638b1480cd4becd1ffaa724b2a759089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhabha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:35:05 compute-0 systemd[1]: Started libpod-conmon-5224457076c8f1180d4877b5f4545cdd638b1480cd4becd1ffaa724b2a759089.scope.
Dec 09 16:35:05 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:35:05 compute-0 podman[255106]: 2025-12-09 16:35:05.397103845 +0000 UTC m=+0.020218875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:35:05 compute-0 podman[255106]: 2025-12-09 16:35:05.494552438 +0000 UTC m=+0.117667478 container init 5224457076c8f1180d4877b5f4545cdd638b1480cd4becd1ffaa724b2a759089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhabha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:35:05 compute-0 podman[255106]: 2025-12-09 16:35:05.500657991 +0000 UTC m=+0.123773001 container start 5224457076c8f1180d4877b5f4545cdd638b1480cd4becd1ffaa724b2a759089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhabha, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:35:05 compute-0 podman[255106]: 2025-12-09 16:35:05.50378024 +0000 UTC m=+0.126895310 container attach 5224457076c8f1180d4877b5f4545cdd638b1480cd4becd1ffaa724b2a759089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhabha, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:35:05 compute-0 adoring_bhabha[255122]: 167 167
Dec 09 16:35:05 compute-0 systemd[1]: libpod-5224457076c8f1180d4877b5f4545cdd638b1480cd4becd1ffaa724b2a759089.scope: Deactivated successfully.
Dec 09 16:35:05 compute-0 podman[255106]: 2025-12-09 16:35:05.507516175 +0000 UTC m=+0.130631185 container died 5224457076c8f1180d4877b5f4545cdd638b1480cd4becd1ffaa724b2a759089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhabha, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffd7182541c4a263e515fda1d7a30013acbd61c4f204d4c694d7d181a3e05f1b-merged.mount: Deactivated successfully.
Dec 09 16:35:05 compute-0 podman[255106]: 2025-12-09 16:35:05.552369267 +0000 UTC m=+0.175484297 container remove 5224457076c8f1180d4877b5f4545cdd638b1480cd4becd1ffaa724b2a759089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:35:05 compute-0 systemd[1]: libpod-conmon-5224457076c8f1180d4877b5f4545cdd638b1480cd4becd1ffaa724b2a759089.scope: Deactivated successfully.
Dec 09 16:35:05 compute-0 podman[255147]: 2025-12-09 16:35:05.706449927 +0000 UTC m=+0.033896732 container create a7e40cbb83c74c49d7a359913e0b9212fc25210a538042c5f89338fc150c5d8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:35:05 compute-0 systemd[1]: Started libpod-conmon-a7e40cbb83c74c49d7a359913e0b9212fc25210a538042c5f89338fc150c5d8c.scope.
Dec 09 16:35:05 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a64f2e69dfd1c32ea8344b85f3c065c2b8f1aa1a6fb2447e0c8c6e126792a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a64f2e69dfd1c32ea8344b85f3c065c2b8f1aa1a6fb2447e0c8c6e126792a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a64f2e69dfd1c32ea8344b85f3c065c2b8f1aa1a6fb2447e0c8c6e126792a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a64f2e69dfd1c32ea8344b85f3c065c2b8f1aa1a6fb2447e0c8c6e126792a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:05 compute-0 podman[255147]: 2025-12-09 16:35:05.691585215 +0000 UTC m=+0.019032010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:35:05 compute-0 podman[255147]: 2025-12-09 16:35:05.839696884 +0000 UTC m=+0.167143749 container init a7e40cbb83c74c49d7a359913e0b9212fc25210a538042c5f89338fc150c5d8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kare, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 09 16:35:05 compute-0 podman[255147]: 2025-12-09 16:35:05.847340871 +0000 UTC m=+0.174787686 container start a7e40cbb83c74c49d7a359913e0b9212fc25210a538042c5f89338fc150c5d8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 16:35:05 compute-0 podman[255147]: 2025-12-09 16:35:05.877143306 +0000 UTC m=+0.204590111 container attach a7e40cbb83c74c49d7a359913e0b9212fc25210a538042c5f89338fc150c5d8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kare, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:35:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:35:06 compute-0 ceph-mon[75222]: pgmap v1089: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:06 compute-0 affectionate_kare[255163]: {
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:     "0": [
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:         {
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "devices": [
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "/dev/loop3"
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             ],
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_name": "ceph_lv0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_size": "21470642176",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "name": "ceph_lv0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "tags": {
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.cluster_name": "ceph",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.crush_device_class": "",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.encrypted": "0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.objectstore": "bluestore",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.osd_id": "0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.type": "block",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.vdo": "0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.with_tpm": "0"
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             },
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "type": "block",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "vg_name": "ceph_vg0"
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:         }
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:     ],
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:     "1": [
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:         {
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "devices": [
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "/dev/loop4"
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             ],
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_name": "ceph_lv1",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_size": "21470642176",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "name": "ceph_lv1",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "tags": {
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.cluster_name": "ceph",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.crush_device_class": "",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.encrypted": "0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.objectstore": "bluestore",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.osd_id": "1",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.type": "block",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.vdo": "0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.with_tpm": "0"
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             },
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "type": "block",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "vg_name": "ceph_vg1"
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:         }
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:     ],
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:     "2": [
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:         {
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "devices": [
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "/dev/loop5"
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             ],
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_name": "ceph_lv2",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_size": "21470642176",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "name": "ceph_lv2",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "tags": {
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.cluster_name": "ceph",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.crush_device_class": "",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.encrypted": "0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.objectstore": "bluestore",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.osd_id": "2",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.type": "block",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.vdo": "0",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:                 "ceph.with_tpm": "0"
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             },
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "type": "block",
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:             "vg_name": "ceph_vg2"
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:         }
Dec 09 16:35:06 compute-0 affectionate_kare[255163]:     ]
Dec 09 16:35:06 compute-0 affectionate_kare[255163]: }
Dec 09 16:35:06 compute-0 systemd[1]: libpod-a7e40cbb83c74c49d7a359913e0b9212fc25210a538042c5f89338fc150c5d8c.scope: Deactivated successfully.
Dec 09 16:35:06 compute-0 podman[255147]: 2025-12-09 16:35:06.234802838 +0000 UTC m=+0.562249613 container died a7e40cbb83c74c49d7a359913e0b9212fc25210a538042c5f89338fc150c5d8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4a64f2e69dfd1c32ea8344b85f3c065c2b8f1aa1a6fb2447e0c8c6e126792a2-merged.mount: Deactivated successfully.
Dec 09 16:35:06 compute-0 podman[255147]: 2025-12-09 16:35:06.270576753 +0000 UTC m=+0.598023528 container remove a7e40cbb83c74c49d7a359913e0b9212fc25210a538042c5f89338fc150c5d8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 09 16:35:06 compute-0 systemd[1]: libpod-conmon-a7e40cbb83c74c49d7a359913e0b9212fc25210a538042c5f89338fc150c5d8c.scope: Deactivated successfully.
Dec 09 16:35:06 compute-0 sudo[255068]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:06 compute-0 sudo[255185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:35:06 compute-0 sudo[255185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:06 compute-0 sudo[255185]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:06 compute-0 sudo[255210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:35:06 compute-0 sudo[255210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:06 compute-0 podman[255247]: 2025-12-09 16:35:06.787370818 +0000 UTC m=+0.041780336 container create 58ab3597b6574435fe160bead4b216f9d3f525d89ad33dee90ab6b7849439610 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 09 16:35:06 compute-0 systemd[1]: Started libpod-conmon-58ab3597b6574435fe160bead4b216f9d3f525d89ad33dee90ab6b7849439610.scope.
Dec 09 16:35:06 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:35:06 compute-0 podman[255247]: 2025-12-09 16:35:06.851618519 +0000 UTC m=+0.106028137 container init 58ab3597b6574435fe160bead4b216f9d3f525d89ad33dee90ab6b7849439610 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noether, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:35:06 compute-0 podman[255247]: 2025-12-09 16:35:06.859275986 +0000 UTC m=+0.113685514 container start 58ab3597b6574435fe160bead4b216f9d3f525d89ad33dee90ab6b7849439610 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noether, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 16:35:06 compute-0 podman[255247]: 2025-12-09 16:35:06.768880453 +0000 UTC m=+0.023290031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:35:06 compute-0 podman[255247]: 2025-12-09 16:35:06.86326316 +0000 UTC m=+0.117672788 container attach 58ab3597b6574435fe160bead4b216f9d3f525d89ad33dee90ab6b7849439610 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 09 16:35:06 compute-0 strange_noether[255263]: 167 167
Dec 09 16:35:06 compute-0 systemd[1]: libpod-58ab3597b6574435fe160bead4b216f9d3f525d89ad33dee90ab6b7849439610.scope: Deactivated successfully.
Dec 09 16:35:06 compute-0 podman[255247]: 2025-12-09 16:35:06.867414447 +0000 UTC m=+0.121823975 container died 58ab3597b6574435fe160bead4b216f9d3f525d89ad33dee90ab6b7849439610 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noether, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3ab3587633fca9ccc95892d967acca3a7198034fde30e15fac0329b608760b8-merged.mount: Deactivated successfully.
Dec 09 16:35:06 compute-0 podman[255247]: 2025-12-09 16:35:06.899849127 +0000 UTC m=+0.154258655 container remove 58ab3597b6574435fe160bead4b216f9d3f525d89ad33dee90ab6b7849439610 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:35:06 compute-0 systemd[1]: libpod-conmon-58ab3597b6574435fe160bead4b216f9d3f525d89ad33dee90ab6b7849439610.scope: Deactivated successfully.
Dec 09 16:35:07 compute-0 nova_compute[243452]: 2025-12-09 16:35:07.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:35:07 compute-0 podman[255288]: 2025-12-09 16:35:07.063539329 +0000 UTC m=+0.036917658 container create d0a8de3834f48b7ede36707d136d14d90faccedc40f20e8bd07068ce5d660c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 16:35:07 compute-0 systemd[1]: Started libpod-conmon-d0a8de3834f48b7ede36707d136d14d90faccedc40f20e8bd07068ce5d660c42.scope.
Dec 09 16:35:07 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4018a880e8047d1329ea583f196103f06a5b36e02f59dc084312eca3d4485322/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4018a880e8047d1329ea583f196103f06a5b36e02f59dc084312eca3d4485322/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4018a880e8047d1329ea583f196103f06a5b36e02f59dc084312eca3d4485322/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4018a880e8047d1329ea583f196103f06a5b36e02f59dc084312eca3d4485322/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:35:07 compute-0 podman[255288]: 2025-12-09 16:35:07.135898151 +0000 UTC m=+0.109276490 container init d0a8de3834f48b7ede36707d136d14d90faccedc40f20e8bd07068ce5d660c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:35:07 compute-0 podman[255288]: 2025-12-09 16:35:07.142171979 +0000 UTC m=+0.115550308 container start d0a8de3834f48b7ede36707d136d14d90faccedc40f20e8bd07068ce5d660c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:35:07 compute-0 podman[255288]: 2025-12-09 16:35:07.048004708 +0000 UTC m=+0.021383057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:35:07 compute-0 podman[255288]: 2025-12-09 16:35:07.144647549 +0000 UTC m=+0.118025938 container attach d0a8de3834f48b7ede36707d136d14d90faccedc40f20e8bd07068ce5d660c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 09 16:35:07 compute-0 lvm[255383]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:35:07 compute-0 lvm[255382]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:35:07 compute-0 lvm[255383]: VG ceph_vg0 finished
Dec 09 16:35:07 compute-0 lvm[255382]: VG ceph_vg1 finished
Dec 09 16:35:07 compute-0 lvm[255385]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:35:07 compute-0 lvm[255385]: VG ceph_vg2 finished
Dec 09 16:35:07 compute-0 lvm[255386]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:35:07 compute-0 lvm[255386]: VG ceph_vg2 finished
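The lvm[...] burst above is udev-triggered pvscan autoactivation: as each loop-device PV comes online, lvm checks whether its VG now has all of its PVs and, once complete, activates it (ceph_vg2 is reported twice because two concurrent pvscan instances, 255385 and 255386, raced on /dev/loop5). The same completeness view is available from the reporting tools; a minimal sketch, assuming an lvm2 with JSON report support:

# Sketch: list volume groups and their PV counts via lvm2's JSON output.
# Assumes `vgs` is installed and supports --reportformat json.
import json
import subprocess

out = subprocess.run(
    ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,vg_size"],
    check=True, capture_output=True, text=True,
).stdout
for vg in json.loads(out)["report"][0]["vg"]:
    # A VG with all of its PVs present is "complete" in the sense
    # the pvscan messages above use.
    print(vg["vg_name"], vg["pv_count"], vg["vg_size"])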
Dec 09 16:35:07 compute-0 tender_mccarthy[255304]: {}
Dec 09 16:35:07 compute-0 systemd[1]: libpod-d0a8de3834f48b7ede36707d136d14d90faccedc40f20e8bd07068ce5d660c42.scope: Deactivated successfully.
Dec 09 16:35:07 compute-0 systemd[1]: libpod-d0a8de3834f48b7ede36707d136d14d90faccedc40f20e8bd07068ce5d660c42.scope: Consumed 1.273s CPU time.
Dec 09 16:35:07 compute-0 podman[255288]: 2025-12-09 16:35:07.937370668 +0000 UTC m=+0.910749007 container died d0a8de3834f48b7ede36707d136d14d90faccedc40f20e8bd07068ce5d660c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 16:35:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4018a880e8047d1329ea583f196103f06a5b36e02f59dc084312eca3d4485322-merged.mount: Deactivated successfully.
Dec 09 16:35:07 compute-0 podman[255288]: 2025-12-09 16:35:07.97624766 +0000 UTC m=+0.949625999 container remove d0a8de3834f48b7ede36707d136d14d90faccedc40f20e8bd07068ce5d660c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:35:07 compute-0 systemd[1]: libpod-conmon-d0a8de3834f48b7ede36707d136d14d90faccedc40f20e8bd07068ce5d660c42.scope: Deactivated successfully.
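The block ending here is one complete podman lifecycle for the short-lived cephadm helper container tender_mccarthy: create, init, start, attach, died, remove, bracketed by the systemd conmon scope (the container printed `{}` via tender_mccarthy[255304] and exited; the image-pull event carries an earlier monotonic offset because podman emits events when they are recorded, not strictly in order). The same event sequence can be reproduced with any `--rm` container; a minimal sketch, assuming a working podman:

# Sketch: run a throwaway container and generate the same
# create -> init -> start -> attach -> died -> remove event chain.
import subprocess

# --rm makes podman emit the full lifecycle, as journald recorded above.
subprocess.run(
    ["podman", "run", "--rm", "quay.io/centos/centos:stream9", "echo", "{}"],
    check=True,
)
# The recorded events can then be inspected with, e.g.:
#   podman events --since 1m --filter event=create --filter event=remove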
Dec 09 16:35:08 compute-0 sudo[255210]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:35:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:35:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:08 compute-0 nova_compute[243452]: 2025-12-09 16:35:08.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:35:08 compute-0 ceph-mon[75222]: pgmap v1090: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:08 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:08 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:35:08 compute-0 sudo[255400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:35:08 compute-0 sudo[255400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:35:08 compute-0 sudo[255400]: pam_unix(sudo:session): session closed for user root
Dec 09 16:35:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:10 compute-0 ceph-mon[75222]: pgmap v1091: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:35:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2093376823' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:35:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:35:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2093376823' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:35:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:35:11 compute-0 nova_compute[243452]: 2025-12-09 16:35:11.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:35:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/2093376823' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:35:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/2093376823' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
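The audit entries above show the OpenStack client on 192.168.122.10 polling pool capacity: a `df` mon command followed by `osd pool get-quota` on the volumes pool, both requested as JSON. A minimal sketch issuing the same two queries through the CLI, assuming a reachable cluster and a keyring for the `openstack` identity this log uses elsewhere:

# Sketch: the two mon commands client.openstack dispatches above,
# issued via the ceph CLI and parsed as JSON.
import json
import subprocess

def mon_json(*args: str) -> dict:
    out = subprocess.run(
        ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
         *args, "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

df = mon_json("df")                                      # {"prefix":"df"}
quota = mon_json("osd", "pool", "get-quota", "volumes")  # {"prefix":"osd pool get-quota"}
print(df["stats"]["total_avail_bytes"], quota["quota_max_bytes"])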
Dec 09 16:35:11 compute-0 podman[255426]: 2025-12-09 16:35:11.608005167 +0000 UTC m=+0.050549395 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 09 16:35:11 compute-0 podman[255425]: 2025-12-09 16:35:11.645567992 +0000 UTC m=+0.087898764 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 09 16:35:12 compute-0 ceph-mon[75222]: pgmap v1092: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:14 compute-0 ceph-mon[75222]: pgmap v1093: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:35:16 compute-0 ceph-mon[75222]: pgmap v1094: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:35:17.856 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:35:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:35:17.856 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:35:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:35:17.856 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
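The three DEBUG lines above are the standard oslo.concurrency pattern around ProcessMonitor._check_child_processes: acquiring, acquired (waited 0.001s), released (held 0.000s). The same log shape is produced by the lockutils decorator; a minimal sketch, assuming oslo.concurrency is installed:

# Sketch: the lockutils.synchronized decorator that emits the
# acquire/acquired/released DEBUG trio seen above.
# Assumes: pip install oslo.concurrency
import logging

from oslo_concurrency import lockutils

logging.basicConfig(level=logging.DEBUG)

@lockutils.synchronized("_check_child_processes")
def check_child_processes() -> None:
    # Runs with the named in-process lock held; the wrapper logs
    # "Lock ... acquired ... waited Ns" and "released ... held Ns".
    pass

check_child_processes()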
Dec 09 16:35:18 compute-0 ceph-mon[75222]: pgmap v1095: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 24 op/s
Dec 09 16:35:20 compute-0 ceph-mon[75222]: pgmap v1096: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 24 op/s
Dec 09 16:35:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Dec 09 16:35:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:35:22 compute-0 ceph-mon[75222]: pgmap v1097: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Dec 09 16:35:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Dec 09 16:35:24 compute-0 ceph-mon[75222]: pgmap v1098: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Dec 09 16:35:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Dec 09 16:35:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:35:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:35:25
Dec 09 16:35:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:35:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:35:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.mgr', '.rgw.root', 'default.rgw.log', 'volumes', 'backups', 'vms', 'cephfs.cephfs.data', 'images']
Dec 09 16:35:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
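This balancer pass ran in upmap mode with a 5% max-misplaced budget and prepared 0 of a possible 10 upmap changes, i.e. the PG distribution across the listed pools is already optimal and nothing was queued. The module state can be confirmed from the CLI; a minimal sketch, assuming admin access to the cluster:

# Sketch: query the balancer module that logged the pass above.
import json
import subprocess

status = json.loads(subprocess.run(
    ["ceph", "balancer", "status", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout)
# Expect mode "upmap" and active true, matching the log lines.
print(status["mode"], status["active"])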
Dec 09 16:35:26 compute-0 ceph-mon[75222]: pgmap v1099: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:35:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:35:27 compute-0 podman[255467]: 2025-12-09 16:35:27.630119835 +0000 UTC m=+0.073668130 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Dec 09 16:35:28 compute-0 ceph-mon[75222]: pgmap v1100: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Dec 09 16:35:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Dec 09 16:35:30 compute-0 ceph-mon[75222]: pgmap v1101: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Dec 09 16:35:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 50 op/s
Dec 09 16:35:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:35:32 compute-0 ceph-mon[75222]: pgmap v1102: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 50 op/s
Dec 09 16:35:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:34 compute-0 ceph-mon[75222]: pgmap v1103: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:35 compute-0 sshd-session[255488]: Accepted publickey for zuul from 192.168.122.30 port 41108 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:35:35 compute-0 systemd-logind[786]: New session 52 of user zuul.
Dec 09 16:35:35 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec 09 16:35:35 compute-0 sshd-session[255488]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:35:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:35:36 compute-0 ceph-mon[75222]: pgmap v1104: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.427166042984902e-07 of space, bias 1.0, pg target 0.00025281498128954704 quantized to 32 (current 32)
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7749663169956723e-06 of space, bias 4.0, pg target 0.0021299595803948067 quantized to 16 (current 16)
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:35:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
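The autoscaler numbers above are internally consistent: each pool's pg target is its used-space fraction times its bias times a PG budget of 300, which matches the default mon_target_pg_per_osd of 100 across three OSDs (an inference from this log: three ceph_vg VGs appeared on loop devices above, and 60 GiB total capacity fits three 20 GiB OSDs). The result is then quantized, here staying at each pool's current pg_num. A short check of that arithmetic:

# Check: reproduce the pg_autoscaler targets logged above.
# budget = mon_target_pg_per_osd (default 100) * num_osds (3, inferred).
BUDGET = 100 * 3

pools = {
    ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
    "images":             (8.427166042984902e-07, 1.0, 0.00025281498128954704),
    "cephfs.cephfs.meta": (1.7749663169956723e-06, 4.0, 0.0021299595803948067),
}
for name, (frac, bias, logged_target) in pools.items():
    target = frac * bias * BUDGET
    assert abs(target - logged_target) < 1e-12, name
    print(f"{name}: pg target {target:.10g} (matches log)")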
Dec 09 16:35:38 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:35:38.129 155091 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:96:e5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:69:89:7d:fc:2e'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 16:35:38 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:35:38.131 155091 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 09 16:35:38 compute-0 ceph-mon[75222]: pgmap v1105: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:40 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:35:40.135 155091 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=037f0e18-4bfd-4487-a7a8-05ae973391a9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 16:35:40 compute-0 ceph-mon[75222]: pgmap v1106: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:35:41 compute-0 sshd-session[255722]: Invalid user test from 146.190.31.45 port 33262
Dec 09 16:35:41 compute-0 sshd-session[255722]: Connection closed by invalid user test 146.190.31.45 port 33262 [preauth]
Dec 09 16:35:41 compute-0 sshd-session[255491]: Connection closed by 192.168.122.30 port 41108
Dec 09 16:35:41 compute-0 sshd-session[255488]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:35:41 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Dec 09 16:35:41 compute-0 systemd-logind[786]: Session 52 logged out. Waiting for processes to exit.
Dec 09 16:35:41 compute-0 systemd-logind[786]: Removed session 52.
Dec 09 16:35:41 compute-0 podman[255748]: 2025-12-09 16:35:41.970063199 +0000 UTC m=+0.064281365 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 09 16:35:41 compute-0 podman[255747]: 2025-12-09 16:35:41.997710849 +0000 UTC m=+0.096193056 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 09 16:35:42 compute-0 ceph-mon[75222]: pgmap v1107: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:43 compute-0 ceph-mon[75222]: pgmap v1108: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:35:45 compute-0 ceph-mon[75222]: pgmap v1109: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:47 compute-0 ceph-mon[75222]: pgmap v1110: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:47.958365) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298147958505, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2083, "num_deletes": 254, "total_data_size": 3494562, "memory_usage": 3540624, "flush_reason": "Manual Compaction"}
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298147979223, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3404256, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21004, "largest_seqno": 23086, "table_properties": {"data_size": 3394853, "index_size": 5961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19012, "raw_average_key_size": 20, "raw_value_size": 3375940, "raw_average_value_size": 3576, "num_data_blocks": 269, "num_entries": 944, "num_filter_entries": 944, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765297934, "oldest_key_time": 1765297934, "file_creation_time": 1765298147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 21006 microseconds, and 9105 cpu microseconds.
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:47.979400) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3404256 bytes OK
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:47.979462) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:47.980627) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:47.980644) EVENT_LOG_v1 {"time_micros": 1765298147980638, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:47.980665) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3485847, prev total WAL file size 3485847, number of live WAL files 2.
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:47.982167) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3324KB)], [50(7484KB)]
Dec 09 16:35:47 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298147982230, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11068881, "oldest_snapshot_seqno": -1}
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4776 keys, 9314256 bytes, temperature: kUnknown
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298148050569, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9314256, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9279554, "index_size": 21664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 117028, "raw_average_key_size": 24, "raw_value_size": 9190459, "raw_average_value_size": 1924, "num_data_blocks": 909, "num_entries": 4776, "num_filter_entries": 4776, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765298147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:48.051052) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9314256 bytes
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:48.052671) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.7 rd, 136.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5298, records dropped: 522 output_compression: NoCompression
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:48.052693) EVENT_LOG_v1 {"time_micros": 1765298148052681, "job": 26, "event": "compaction_finished", "compaction_time_micros": 68463, "compaction_time_cpu_micros": 37602, "output_level": 6, "num_output_files": 1, "total_output_size": 9314256, "num_input_records": 5298, "num_output_records": 4776, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298148053750, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298148055473, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:47.982024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:48.055605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:48.055614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:48.055620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:48.055624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:35:48 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:35:48.055628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
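Job 26's summary line above encodes the compaction amplification arithmetic: 3.2 MB of freshly flushed L0 input plus 7.3 MB of existing L6 input produced one 8.9 MB L6 file, so write-amplify is output over new data (about 2.7) and read-write-amplify is everything read plus everything written over new data (about 6.0). The exact byte counts appear in the EVENT_LOG entries; a quick check:

# Check: derive JOB 26's amplification factors from the byte counts
# in the EVENT_LOG entries above.
l0_input  = 3_404_256    # table #52, the freshly flushed L0 file
total_in  = 11_068_881   # "input_data_size" (L0 file #52 + L6 file #50)
total_out = 9_314_256    # "total_output_size" (new L6 file #53)

write_amp = total_out / l0_input               # ~2.7, as logged
rw_amp    = (total_in + total_out) / l0_input  # ~6.0, as logged
print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")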
Dec 09 16:35:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:49 compute-0 ceph-mon[75222]: pgmap v1111: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:35:51 compute-0 ceph-mon[75222]: pgmap v1112: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:53 compute-0 ceph-mon[75222]: pgmap v1113: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:35:56 compute-0 ceph-mon[75222]: pgmap v1114: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:35:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:35:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:35:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:35:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:35:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:35:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:58 compute-0 ceph-mon[75222]: pgmap v1115: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:58 compute-0 podman[255786]: 2025-12-09 16:35:58.625528311 +0000 UTC m=+0.064107700 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 09 16:35:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:35:59 compute-0 nova_compute[243452]: 2025-12-09 16:35:59.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:35:59 compute-0 nova_compute[243452]: 2025-12-09 16:35:59.196 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:35:59 compute-0 nova_compute[243452]: 2025-12-09 16:35:59.196 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:35:59 compute-0 nova_compute[243452]: 2025-12-09 16:35:59.196 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:35:59 compute-0 nova_compute[243452]: 2025-12-09 16:35:59.196 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:35:59 compute-0 nova_compute[243452]: 2025-12-09 16:35:59.197 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:35:59 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:35:59 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1497084961' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:35:59 compute-0 nova_compute[243452]: 2025-12-09 16:35:59.766 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:35:59 compute-0 nova_compute[243452]: 2025-12-09 16:35:59.928 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:35:59 compute-0 nova_compute[243452]: 2025-12-09 16:35:59.929 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5108MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:35:59 compute-0 nova_compute[243452]: 2025-12-09 16:35:59.929 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:35:59 compute-0 nova_compute[243452]: 2025-12-09 16:35:59.929 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:36:00 compute-0 nova_compute[243452]: 2025-12-09 16:36:00.045 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:36:00 compute-0 nova_compute[243452]: 2025-12-09 16:36:00.046 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:36:00 compute-0 nova_compute[243452]: 2025-12-09 16:36:00.060 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:36:00 compute-0 ceph-mon[75222]: pgmap v1116: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:00 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1497084961' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:36:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:36:00 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2921039242' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:36:00 compute-0 nova_compute[243452]: 2025-12-09 16:36:00.657 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
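Both ceph df invocations above take roughly 0.6 s end to end. A standalone sketch of the same call run outside the oslo_concurrency wrapper (the "stats"/"total_bytes" field names are an assumption based on recent Ceph releases):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        timeout=30,
    )
    stats = json.loads(out)["stats"]
    # Should match the mon's pgmap line: ~60 GiB total, ~60 GiB avail.
    print(stats["total_bytes"], stats["total_avail_bytes"])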
Dec 09 16:36:00 compute-0 nova_compute[243452]: 2025-12-09 16:36:00.662 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:36:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:00 compute-0 nova_compute[243452]: 2025-12-09 16:36:00.691 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
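The inventory dict above is what Placement uses to size this provider. Effective capacity per resource class is (total - reserved) * allocation_ratio; a worked sketch with the logged values:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1

So this host can accept up to 32 vCPUs of instances despite having 8 physical vCPUs, while disk is deliberately under-committed (ratio 0.9).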
Dec 09 16:36:00 compute-0 nova_compute[243452]: 2025-12-09 16:36:00.693 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:36:00 compute-0 nova_compute[243452]: 2025-12-09 16:36:00.694 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
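The acquire/release pair around the update (waited 0.000s, held 0.764s) is the standard oslo.concurrency pattern. A minimal sketch of both forms, assuming only that oslo.concurrency is importable; the function body is illustrative, not nova's:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # tracker state is mutated only while holding the semaphore

    # Equivalent context-manager form:
    with lockutils.lock("compute_resources"):
        ...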
Dec 09 16:36:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:36:01 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2921039242' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:36:02 compute-0 ceph-mon[75222]: pgmap v1117: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:03 compute-0 nova_compute[243452]: 2025-12-09 16:36:03.694 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:36:04 compute-0 nova_compute[243452]: 2025-12-09 16:36:04.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:36:04 compute-0 nova_compute[243452]: 2025-12-09 16:36:04.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:36:04 compute-0 nova_compute[243452]: 2025-12-09 16:36:04.056 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:36:04 compute-0 nova_compute[243452]: 2025-12-09 16:36:04.075 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:36:04 compute-0 ceph-mon[75222]: pgmap v1118: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:05 compute-0 nova_compute[243452]: 2025-12-09 16:36:05.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:36:05 compute-0 nova_compute[243452]: 2025-12-09 16:36:05.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:36:05 compute-0 nova_compute[243452]: 2025-12-09 16:36:05.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
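_reclaim_queued_deletes bails out because reclaim_instance_interval was left at its default of 0. The machinery behind all of the run_periodic_tasks lines above is oslo.service; a minimal sketch of a task with the same guard (class and option registration here are illustrative, not nova's actual code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=300)
        def reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # surfaces in the log as "skipping..."
            # ... reclaim soft-deleted instances older than the interval ...

    Manager().run_periodic_tasks(context=None)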
Dec 09 16:36:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:36:06 compute-0 nova_compute[243452]: 2025-12-09 16:36:06.047 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:36:06 compute-0 ceph-mon[75222]: pgmap v1119: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:08 compute-0 nova_compute[243452]: 2025-12-09 16:36:08.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:36:08 compute-0 nova_compute[243452]: 2025-12-09 16:36:08.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:36:08 compute-0 ceph-mon[75222]: pgmap v1120: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:08 compute-0 sudo[255852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:36:08 compute-0 sudo[255852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:36:08 compute-0 sudo[255852]: pam_unix(sudo:session): session closed for user root
Dec 09 16:36:08 compute-0 sudo[255877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:36:08 compute-0 sudo[255877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:36:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:08 compute-0 sudo[255877]: pam_unix(sudo:session): session closed for user root
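The two sudo entries above are one cephadm round trip: the mgr's ssh orchestrator first resolves the remote interpreter, then runs the checksummed host-local cephadm copy with it. Reconstructed as a standalone sketch:

    import subprocess

    FSID = "67f67f44-54fc-54ea-8df0-10931b6ecdaf"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    # Step 1: locate the interpreter (the `/bin/which python3` sudo entry).
    python3 = subprocess.check_output(
        ["sudo", "which", "python3"], text=True).strip()

    # Step 2: run cephadm with the orchestrator's 895 s timeout.
    facts = subprocess.check_output(
        ["sudo", python3, CEPHADM, "--timeout", "895", "gather-facts"],
        text=True)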
Dec 09 16:36:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:36:08 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:36:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:36:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:36:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:36:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:36:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:36:08 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:36:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:36:08 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:36:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:36:08 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
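Each handle_command/dispatch pair above is a mon_command arriving as a JSON "prefix" dict. The same calls can be issued from Python through librados (python3-rados), which is ultimately how both the mgr and the OpenStack clients talk to the mon; a sketch using the client.openstack identity seen in the audit lines:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        assert ret == 0, errs
        print(json.loads(outbuf)["stats"]["total_avail_bytes"])
    finally:
        cluster.shutdown()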
Dec 09 16:36:09 compute-0 sudo[255932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:36:09 compute-0 sudo[255932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:36:09 compute-0 sudo[255932]: pam_unix(sudo:session): session closed for user root
Dec 09 16:36:09 compute-0 sudo[255957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:36:09 compute-0 sudo[255957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
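The lvm batch command above is worth unpacking: `--config-json -` makes cephadm read a JSON blob of config and keyring material from stdin (so credentials never appear on the command line), `--` separates cephadm's arguments from ceph-volume's, and `--no-systemd` defers unit creation to a later deploy step. A reconstruction of the invocation; the config and keyring contents below are placeholders, not the real material:

    import json
    import subprocess

    FSID = "67f67f44-54fc-54ea-8df0-10931b6ecdaf"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    config_json = json.dumps({
        "config": f"[global]\nfsid = {FSID}\n",       # minimal ceph.conf
        "keyring": "<client.bootstrap-osd keyring>",  # placeholder
    })
    subprocess.run(
        ["sudo", "python3", CEPHADM,
         "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
         "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
         "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2",
         "--objectstore", "bluestore", "--yes", "--no-systemd"],
        input=config_json, text=True, check=True)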
Dec 09 16:36:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:36:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:36:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:36:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:36:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:36:09 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:36:09 compute-0 podman[255994]: 2025-12-09 16:36:09.346238789 +0000 UTC m=+0.054367335 container create 8458aca44bab1b9aece3bc620aea28f1326b61362719c59d333162311f28f57e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:36:09 compute-0 systemd[1]: Started libpod-conmon-8458aca44bab1b9aece3bc620aea28f1326b61362719c59d333162311f28f57e.scope.
Dec 09 16:36:09 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:36:09 compute-0 podman[255994]: 2025-12-09 16:36:09.320627836 +0000 UTC m=+0.028756422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:36:09 compute-0 podman[255994]: 2025-12-09 16:36:09.422120271 +0000 UTC m=+0.130248837 container init 8458aca44bab1b9aece3bc620aea28f1326b61362719c59d333162311f28f57e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 09 16:36:09 compute-0 podman[255994]: 2025-12-09 16:36:09.428306725 +0000 UTC m=+0.136435261 container start 8458aca44bab1b9aece3bc620aea28f1326b61362719c59d333162311f28f57e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kalam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 16:36:09 compute-0 podman[255994]: 2025-12-09 16:36:09.431253898 +0000 UTC m=+0.139382444 container attach 8458aca44bab1b9aece3bc620aea28f1326b61362719c59d333162311f28f57e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:36:09 compute-0 nostalgic_kalam[256010]: 167 167
Dec 09 16:36:09 compute-0 systemd[1]: libpod-8458aca44bab1b9aece3bc620aea28f1326b61362719c59d333162311f28f57e.scope: Deactivated successfully.
Dec 09 16:36:09 compute-0 podman[255994]: 2025-12-09 16:36:09.435057066 +0000 UTC m=+0.143185612 container died 8458aca44bab1b9aece3bc620aea28f1326b61362719c59d333162311f28f57e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kalam, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:36:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffb504eb490285b5fc935b58c53c418968976122b10eed585505d29106765648-merged.mount: Deactivated successfully.
Dec 09 16:36:09 compute-0 podman[255994]: 2025-12-09 16:36:09.48055792 +0000 UTC m=+0.188686466 container remove 8458aca44bab1b9aece3bc620aea28f1326b61362719c59d333162311f28f57e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kalam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 09 16:36:09 compute-0 systemd[1]: libpod-conmon-8458aca44bab1b9aece3bc620aea28f1326b61362719c59d333162311f28f57e.scope: Deactivated successfully.
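The create → init → start → attach → died → remove sequence above is the event trace of one short-lived foreground container, i.e. a `podman run --rm`; the `167 167` line is its stdout (167:167 is the ceph uid/gid inside the image, which cephadm checks before touching host paths). A hypothetical equivalent, assuming stat is what produced that output:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # Foreground + --rm yields exactly: create, init, start, attach,
    # died, remove -- the podman events logged above.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        text=True)
    print(out.strip())  # -> "167 167"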
Dec 09 16:36:09 compute-0 podman[256032]: 2025-12-09 16:36:09.660154979 +0000 UTC m=+0.060592061 container create 89201b9ca243cf3a93c7c64e86b236c76f1f052e6c02fae822d0c9267bfe9c1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:36:09 compute-0 systemd[1]: Started libpod-conmon-89201b9ca243cf3a93c7c64e86b236c76f1f052e6c02fae822d0c9267bfe9c1d.scope.
Dec 09 16:36:09 compute-0 podman[256032]: 2025-12-09 16:36:09.631473089 +0000 UTC m=+0.031910231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:36:09 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5a027a73b4dab8ad3b88200ede58be09597ca90ec3279af0db236fd3b57c17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5a027a73b4dab8ad3b88200ede58be09597ca90ec3279af0db236fd3b57c17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5a027a73b4dab8ad3b88200ede58be09597ca90ec3279af0db236fd3b57c17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5a027a73b4dab8ad3b88200ede58be09597ca90ec3279af0db236fd3b57c17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5a027a73b4dab8ad3b88200ede58be09597ca90ec3279af0db236fd3b57c17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:09 compute-0 podman[256032]: 2025-12-09 16:36:09.762742884 +0000 UTC m=+0.163180006 container init 89201b9ca243cf3a93c7c64e86b236c76f1f052e6c02fae822d0c9267bfe9c1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 16:36:09 compute-0 podman[256032]: 2025-12-09 16:36:09.771902863 +0000 UTC m=+0.172339935 container start 89201b9ca243cf3a93c7c64e86b236c76f1f052e6c02fae822d0c9267bfe9c1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ramanujan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:36:09 compute-0 podman[256032]: 2025-12-09 16:36:09.775858825 +0000 UTC m=+0.176295907 container attach 89201b9ca243cf3a93c7c64e86b236c76f1f052e6c02fae822d0c9267bfe9c1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ramanujan, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:36:10 compute-0 ceph-mon[75222]: pgmap v1121: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:36:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296108072' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:36:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:36:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296108072' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:36:10 compute-0 stoic_ramanujan[256048]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:36:10 compute-0 stoic_ramanujan[256048]: --> All data devices are unavailable
Dec 09 16:36:10 compute-0 systemd[1]: libpod-89201b9ca243cf3a93c7c64e86b236c76f1f052e6c02fae822d0c9267bfe9c1d.scope: Deactivated successfully.
Dec 09 16:36:10 compute-0 podman[256032]: 2025-12-09 16:36:10.3376537 +0000 UTC m=+0.738090782 container died 89201b9ca243cf3a93c7c64e86b236c76f1f052e6c02fae822d0c9267bfe9c1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ramanujan, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:36:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec5a027a73b4dab8ad3b88200ede58be09597ca90ec3279af0db236fd3b57c17-merged.mount: Deactivated successfully.
Dec 09 16:36:10 compute-0 podman[256032]: 2025-12-09 16:36:10.388170866 +0000 UTC m=+0.788607908 container remove 89201b9ca243cf3a93c7c64e86b236c76f1f052e6c02fae822d0c9267bfe9c1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 16:36:10 compute-0 systemd[1]: libpod-conmon-89201b9ca243cf3a93c7c64e86b236c76f1f052e6c02fae822d0c9267bfe9c1d.scope: Deactivated successfully.
Dec 09 16:36:10 compute-0 sudo[255957]: pam_unix(sudo:session): session closed for user root
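"All data devices are unavailable" above is ceph-volume rejecting LVs that already carry OSDs, not reporting a fault: the three LVs still hold ceph.* LVM tags from the original prepare, so this reconcile pass has nothing to create (the lvm list output further down confirms it). A quick host-side check for that state, assuming an lvm2 build with JSON report support:

    import json
    import subprocess

    out = subprocess.check_output(
        ["sudo", "lvs", "--reportformat", "json", "-o", "lv_path,lv_tags"],
        text=True)
    for lv in json.loads(out)["report"][0]["lv"]:
        if "ceph.osd_id=" in lv["lv_tags"]:
            print(lv["lv_path"], "is already a prepared OSD")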
Dec 09 16:36:10 compute-0 sudo[256081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:36:10 compute-0 sudo[256081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:36:10 compute-0 sudo[256081]: pam_unix(sudo:session): session closed for user root
Dec 09 16:36:10 compute-0 sudo[256106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:36:10 compute-0 sudo[256106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:36:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:10 compute-0 podman[256143]: 2025-12-09 16:36:10.869626234 +0000 UTC m=+0.047551613 container create 76dc3c72fea990e51e6aaf85bd13c7ec75273b83c709a58d2ed30c3fbeea75dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:36:10 compute-0 systemd[1]: Started libpod-conmon-76dc3c72fea990e51e6aaf85bd13c7ec75273b83c709a58d2ed30c3fbeea75dc.scope.
Dec 09 16:36:10 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:36:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:36:10 compute-0 podman[256143]: 2025-12-09 16:36:10.848112587 +0000 UTC m=+0.026037996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:36:10 compute-0 podman[256143]: 2025-12-09 16:36:10.948353646 +0000 UTC m=+0.126279035 container init 76dc3c72fea990e51e6aaf85bd13c7ec75273b83c709a58d2ed30c3fbeea75dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chaum, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:36:10 compute-0 podman[256143]: 2025-12-09 16:36:10.955484647 +0000 UTC m=+0.133410026 container start 76dc3c72fea990e51e6aaf85bd13c7ec75273b83c709a58d2ed30c3fbeea75dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:36:10 compute-0 podman[256143]: 2025-12-09 16:36:10.958841552 +0000 UTC m=+0.136766931 container attach 76dc3c72fea990e51e6aaf85bd13c7ec75273b83c709a58d2ed30c3fbeea75dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:36:10 compute-0 systemd[1]: libpod-76dc3c72fea990e51e6aaf85bd13c7ec75273b83c709a58d2ed30c3fbeea75dc.scope: Deactivated successfully.
Dec 09 16:36:10 compute-0 jovial_chaum[256159]: 167 167
Dec 09 16:36:10 compute-0 podman[256143]: 2025-12-09 16:36:10.962535436 +0000 UTC m=+0.140460815 container died 76dc3c72fea990e51e6aaf85bd13c7ec75273b83c709a58d2ed30c3fbeea75dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chaum, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:36:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1009b7bc1f3964f4be0472f8aa9ad570b48947eebd86a4a19fe2b9bb32615f2d-merged.mount: Deactivated successfully.
Dec 09 16:36:11 compute-0 podman[256143]: 2025-12-09 16:36:11.006537728 +0000 UTC m=+0.184463107 container remove 76dc3c72fea990e51e6aaf85bd13c7ec75273b83c709a58d2ed30c3fbeea75dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chaum, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 09 16:36:11 compute-0 systemd[1]: libpod-conmon-76dc3c72fea990e51e6aaf85bd13c7ec75273b83c709a58d2ed30c3fbeea75dc.scope: Deactivated successfully.
Dec 09 16:36:11 compute-0 nova_compute[243452]: 2025-12-09 16:36:11.047 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:36:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/296108072' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:36:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/296108072' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:36:11 compute-0 podman[256185]: 2025-12-09 16:36:11.189844522 +0000 UTC m=+0.042344736 container create 26d891f63fed644734912b754800623989b73ac7f404d26f67b70d6e05f22e3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_engelbart, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 09 16:36:11 compute-0 systemd[1]: Started libpod-conmon-26d891f63fed644734912b754800623989b73ac7f404d26f67b70d6e05f22e3c.scope.
Dec 09 16:36:11 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135d4d92b9d53ddb0b01445bc034dfa8a0fbd74929ffd786d088dfef7af7d91a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135d4d92b9d53ddb0b01445bc034dfa8a0fbd74929ffd786d088dfef7af7d91a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135d4d92b9d53ddb0b01445bc034dfa8a0fbd74929ffd786d088dfef7af7d91a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135d4d92b9d53ddb0b01445bc034dfa8a0fbd74929ffd786d088dfef7af7d91a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:11 compute-0 podman[256185]: 2025-12-09 16:36:11.265219509 +0000 UTC m=+0.117719753 container init 26d891f63fed644734912b754800623989b73ac7f404d26f67b70d6e05f22e3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:36:11 compute-0 podman[256185]: 2025-12-09 16:36:11.174456608 +0000 UTC m=+0.026956822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:36:11 compute-0 podman[256185]: 2025-12-09 16:36:11.276285292 +0000 UTC m=+0.128785496 container start 26d891f63fed644734912b754800623989b73ac7f404d26f67b70d6e05f22e3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:36:11 compute-0 podman[256185]: 2025-12-09 16:36:11.280445659 +0000 UTC m=+0.132945853 container attach 26d891f63fed644734912b754800623989b73ac7f404d26f67b70d6e05f22e3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]: {
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:     "0": [
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:         {
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "devices": [
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "/dev/loop3"
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             ],
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_name": "ceph_lv0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_size": "21470642176",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "name": "ceph_lv0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "tags": {
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.cluster_name": "ceph",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.crush_device_class": "",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.encrypted": "0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.objectstore": "bluestore",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.osd_id": "0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.type": "block",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.vdo": "0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.with_tpm": "0"
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             },
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "type": "block",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "vg_name": "ceph_vg0"
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:         }
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:     ],
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:     "1": [
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:         {
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "devices": [
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "/dev/loop4"
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             ],
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_name": "ceph_lv1",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_size": "21470642176",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "name": "ceph_lv1",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "tags": {
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.cluster_name": "ceph",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.crush_device_class": "",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.encrypted": "0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.objectstore": "bluestore",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.osd_id": "1",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.type": "block",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.vdo": "0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.with_tpm": "0"
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             },
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "type": "block",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "vg_name": "ceph_vg1"
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:         }
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:     ],
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:     "2": [
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:         {
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "devices": [
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "/dev/loop5"
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             ],
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_name": "ceph_lv2",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_size": "21470642176",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "name": "ceph_lv2",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "tags": {
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.cluster_name": "ceph",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.crush_device_class": "",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.encrypted": "0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.objectstore": "bluestore",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.osd_id": "2",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.type": "block",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.vdo": "0",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:                 "ceph.with_tpm": "0"
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             },
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "type": "block",
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:             "vg_name": "ceph_vg2"
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:         }
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]:     ]
Dec 09 16:36:11 compute-0 xenodochial_engelbart[256202]: }
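The JSON block above maps each OSD id to its LV, backing device, and tags. A small sketch for condensing it; lvm_list_output is an assumed variable holding the JSON text captured from the container's stdout:

    import json

    listing = json.loads(lvm_list_output)  # the payload logged above
    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"][0],
                  lv["tags"]["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 5f4f01e5-fa0f-4477-b4bb-353e06b17907
    # 1 /dev/ceph_vg1/ceph_lv1 /dev/loop4 40156d55-4083-4945-ba83-3b1dee6eabbb
    # 2 /dev/ceph_vg2/ceph_lv2 /dev/loop5 243996ad-36e8-4855-a1e1-ac93cfca0f40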
Dec 09 16:36:11 compute-0 systemd[1]: libpod-26d891f63fed644734912b754800623989b73ac7f404d26f67b70d6e05f22e3c.scope: Deactivated successfully.
Dec 09 16:36:11 compute-0 podman[256211]: 2025-12-09 16:36:11.62805969 +0000 UTC m=+0.022046823 container died 26d891f63fed644734912b754800623989b73ac7f404d26f67b70d6e05f22e3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:36:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-135d4d92b9d53ddb0b01445bc034dfa8a0fbd74929ffd786d088dfef7af7d91a-merged.mount: Deactivated successfully.
Dec 09 16:36:11 compute-0 podman[256211]: 2025-12-09 16:36:11.673669668 +0000 UTC m=+0.067656771 container remove 26d891f63fed644734912b754800623989b73ac7f404d26f67b70d6e05f22e3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:36:11 compute-0 systemd[1]: libpod-conmon-26d891f63fed644734912b754800623989b73ac7f404d26f67b70d6e05f22e3c.scope: Deactivated successfully.
Dec 09 16:36:11 compute-0 sudo[256106]: pam_unix(sudo:session): session closed for user root
Dec 09 16:36:11 compute-0 nova_compute[243452]: 2025-12-09 16:36:11.741 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:36:11 compute-0 sudo[256223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:36:11 compute-0 sudo[256223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:36:11 compute-0 sudo[256223]: pam_unix(sudo:session): session closed for user root
Dec 09 16:36:11 compute-0 sudo[256248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:36:11 compute-0 sudo[256248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
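[annotation] The sudo COMMAND above shows how the cephadm mgr module drives host-side operations: a checksummed copy of the cephadm binary under /var/lib/ceph/<fsid>/ is run via python3 with an explicit --image and --timeout, then the ceph-volume subcommand. A sketch of assembling that argv; build_cephadm_cmd is a hypothetical helper, but every value below is taken verbatim from the log line:

    def build_cephadm_cmd(binary, image, timeout, *args):
        # Mirrors the sudo record: python3 <cephadm copy> --image ... --timeout ... <subcommand>
        return ["sudo", "/bin/python3", binary,
                "--image", image, "--timeout", str(timeout), *args]

    cmd = build_cephadm_cmd(
        "/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/"
        "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b",
        "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86",
        895,
        "ceph-volume", "--fsid", "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
        "--", "raw", "list", "--format", "json",
    )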
Dec 09 16:36:12 compute-0 podman[256284]: 2025-12-09 16:36:12.128876755 +0000 UTC m=+0.039352021 container create d4882da3db125fd6da062e8b8dbead5f6adc611debc842ce00652ce066e85492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:36:12 compute-0 ceph-mon[75222]: pgmap v1122: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:12 compute-0 systemd[1]: Started libpod-conmon-d4882da3db125fd6da062e8b8dbead5f6adc611debc842ce00652ce066e85492.scope.
Dec 09 16:36:12 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:36:12 compute-0 podman[256284]: 2025-12-09 16:36:12.200117116 +0000 UTC m=+0.110592362 container init d4882da3db125fd6da062e8b8dbead5f6adc611debc842ce00652ce066e85492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:36:12 compute-0 podman[256284]: 2025-12-09 16:36:12.112119622 +0000 UTC m=+0.022594888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:36:12 compute-0 podman[256284]: 2025-12-09 16:36:12.207804403 +0000 UTC m=+0.118279659 container start d4882da3db125fd6da062e8b8dbead5f6adc611debc842ce00652ce066e85492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:36:12 compute-0 podman[256284]: 2025-12-09 16:36:12.211334533 +0000 UTC m=+0.121809799 container attach d4882da3db125fd6da062e8b8dbead5f6adc611debc842ce00652ce066e85492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_proskuriakova, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:36:12 compute-0 strange_proskuriakova[256302]: 167 167
Dec 09 16:36:12 compute-0 podman[256284]: 2025-12-09 16:36:12.216646192 +0000 UTC m=+0.127121438 container died d4882da3db125fd6da062e8b8dbead5f6adc611debc842ce00652ce066e85492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:36:12 compute-0 systemd[1]: libpod-d4882da3db125fd6da062e8b8dbead5f6adc611debc842ce00652ce066e85492.scope: Deactivated successfully.
Dec 09 16:36:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9c8f11ec4c34e618017c79156f52971b5bdf0c6bb1b80e89b71cf250a34a49f-merged.mount: Deactivated successfully.
Dec 09 16:36:12 compute-0 podman[256284]: 2025-12-09 16:36:12.25622623 +0000 UTC m=+0.166701476 container remove d4882da3db125fd6da062e8b8dbead5f6adc611debc842ce00652ce066e85492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:36:12 compute-0 podman[256301]: 2025-12-09 16:36:12.259757609 +0000 UTC m=+0.093008406 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 16:36:12 compute-0 systemd[1]: libpod-conmon-d4882da3db125fd6da062e8b8dbead5f6adc611debc842ce00652ce066e85492.scope: Deactivated successfully.
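[annotation] Each probe follows the same journal pattern: image pull (resolved from local storage), container create, init, start, attach, the process writes its result to the attached stdout (the "167 167" line above, the ceph uid/gid), then died, remove, and the libpod/conmon scopes deactivate. A rough, hypothetical one-shot equivalent; the real entrypoint and flag set are whatever cephadm passed, not necessarily this:

    import subprocess

    # One-shot, auto-removed container; stdout is captured by the caller,
    # which is why results appear under the container's random name.
    out = subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86",
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())  # ceph uid/gid inside the image, e.g. "167 167"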
Dec 09 16:36:12 compute-0 podman[256298]: 2025-12-09 16:36:12.300058487 +0000 UTC m=+0.133476669 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 09 16:36:12 compute-0 podman[256368]: 2025-12-09 16:36:12.468778119 +0000 UTC m=+0.058626026 container create b32065039e931f84a0fee790553cec1971d515574b4e00aebca4a5cdb937ecb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pascal, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:36:12 compute-0 systemd[1]: Started libpod-conmon-b32065039e931f84a0fee790553cec1971d515574b4e00aebca4a5cdb937ecb8.scope.
Dec 09 16:36:12 compute-0 podman[256368]: 2025-12-09 16:36:12.439539193 +0000 UTC m=+0.029387140 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:36:12 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0df5282818f4428d06198e07f4183fd3221e70dc340a63a43f12e9f27d2c2ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0df5282818f4428d06198e07f4183fd3221e70dc340a63a43f12e9f27d2c2ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0df5282818f4428d06198e07f4183fd3221e70dc340a63a43f12e9f27d2c2ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0df5282818f4428d06198e07f4183fd3221e70dc340a63a43f12e9f27d2c2ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:36:12 compute-0 podman[256368]: 2025-12-09 16:36:12.571890489 +0000 UTC m=+0.161738396 container init b32065039e931f84a0fee790553cec1971d515574b4e00aebca4a5cdb937ecb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pascal, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:36:12 compute-0 podman[256368]: 2025-12-09 16:36:12.584616678 +0000 UTC m=+0.174464585 container start b32065039e931f84a0fee790553cec1971d515574b4e00aebca4a5cdb937ecb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pascal, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 09 16:36:12 compute-0 podman[256368]: 2025-12-09 16:36:12.588642282 +0000 UTC m=+0.178490269 container attach b32065039e931f84a0fee790553cec1971d515574b4e00aebca4a5cdb937ecb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:36:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:13 compute-0 lvm[256463]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:36:13 compute-0 lvm[256463]: VG ceph_vg1 finished
Dec 09 16:36:13 compute-0 lvm[256462]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:36:13 compute-0 lvm[256462]: VG ceph_vg0 finished
Dec 09 16:36:13 compute-0 lvm[256465]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:36:13 compute-0 lvm[256465]: VG ceph_vg2 finished
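[annotation] The three lvm records are event-based autoactivation: as each loop-backed PV comes online, lvm checks whether its VG now has every PV present and declares the group complete. The same completeness view can be reproduced out of band; a sketch using lvm2's JSON reporting, with the VG names from the log:

    import json
    import subprocess

    # pvs supports machine-readable output via --reportformat json.
    pvs = json.loads(subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        check=True, capture_output=True, text=True).stdout)

    by_vg = {}
    for pv in pvs["report"][0]["pv"]:
        by_vg.setdefault(pv["vg_name"], []).append(pv["pv_name"])

    for vg in ("ceph_vg0", "ceph_vg1", "ceph_vg2"):
        print(vg, by_vg.get(vg, []))  # e.g. ceph_vg0 ['/dev/loop3']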
Dec 09 16:36:13 compute-0 stupefied_pascal[256384]: {}
Dec 09 16:36:13 compute-0 podman[256368]: 2025-12-09 16:36:13.425879032 +0000 UTC m=+1.015726899 container died b32065039e931f84a0fee790553cec1971d515574b4e00aebca4a5cdb937ecb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:36:13 compute-0 systemd[1]: libpod-b32065039e931f84a0fee790553cec1971d515574b4e00aebca4a5cdb937ecb8.scope: Deactivated successfully.
Dec 09 16:36:13 compute-0 systemd[1]: libpod-b32065039e931f84a0fee790553cec1971d515574b4e00aebca4a5cdb937ecb8.scope: Consumed 1.414s CPU time.
Dec 09 16:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0df5282818f4428d06198e07f4183fd3221e70dc340a63a43f12e9f27d2c2ef-merged.mount: Deactivated successfully.
Dec 09 16:36:13 compute-0 podman[256368]: 2025-12-09 16:36:13.467329052 +0000 UTC m=+1.057176919 container remove b32065039e931f84a0fee790553cec1971d515574b4e00aebca4a5cdb937ecb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pascal, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:36:13 compute-0 systemd[1]: libpod-conmon-b32065039e931f84a0fee790553cec1971d515574b4e00aebca4a5cdb937ecb8.scope: Deactivated successfully.
Dec 09 16:36:13 compute-0 sudo[256248]: pam_unix(sudo:session): session closed for user root
Dec 09 16:36:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:36:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:36:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:36:13 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:36:13 compute-0 sudo[256480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:36:13 compute-0 sudo[256480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:36:13 compute-0 sudo[256480]: pam_unix(sudo:session): session closed for user root
Dec 09 16:36:14 compute-0 ceph-mon[75222]: pgmap v1123: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:36:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:36:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:36:16 compute-0 ceph-mon[75222]: pgmap v1124: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:36:17.856 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:36:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:36:17.857 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:36:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:36:17.857 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
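[annotation] The acquire/acquired/released triple is oslo.concurrency's standard lock tracing: ProcessMonitor serializes _check_child_processes behind a named lock, and the DEBUG lines are emitted by the lock wrapper itself. The idiom, roughly as neutron uses it (decorator form; the lock name matches the log, the body here is illustrative):

    from oslo_concurrency import lockutils

    # synchronized() produces exactly the "Acquiring lock ... / Lock ...
    # acquired / Lock ... released" DEBUG lines seen above.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # inspect monitored child processes, respawn any that died
        pass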
Dec 09 16:36:18 compute-0 ceph-mon[75222]: pgmap v1125: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:20 compute-0 ceph-mon[75222]: pgmap v1126: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:36:22 compute-0 ceph-mon[75222]: pgmap v1127: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:24 compute-0 ceph-mon[75222]: pgmap v1128: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:24 compute-0 sshd-session[256505]: Invalid user test from 146.190.31.45 port 39948
Dec 09 16:36:25 compute-0 sshd-session[256505]: Connection closed by invalid user test 146.190.31.45 port 39948 [preauth]
Dec 09 16:36:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:36:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:36:25
Dec 09 16:36:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:36:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:36:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', 'backups', 'default.rgw.log', 'vms']
Dec 09 16:36:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
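[annotation] A balancer tick: mode upmap with max misplaced 0.05, a plan named auto_<timestamp>, a walk over the listed pools, and zero changes prepared, which is expected with all 305 PGs active+clean. A sketch of the gating decision only (not the mgr module's actual code):

    def should_attempt_upmap(misplaced_ratio, max_misplaced=0.05):
        # Skip optimizing while too much data is already in flight.
        return misplaced_ratio <= max_misplaced

    # 305/305 PGs active+clean -> nothing misplaced, so the plan runs
    # but yields "prepared 0/10 upmap changes".
    assert should_attempt_upmap(misplaced_ratio=0.0)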
Dec 09 16:36:26 compute-0 ceph-mon[75222]: pgmap v1129: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:36:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
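[annotation] Both rbd_support handlers named above (MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler) refresh per-pool schedules the same way, which is why each RBD pool appears twice in the load_schedules lines. A schematic of that loop, assuming nothing beyond what the log shows; the method bodies are placeholders:

    POOLS = ["vms", "volumes", "backups", "images"]

    class ScheduleHandler:
        def load_schedules(self):
            for pool in POOLS:
                # start_after= is empty in the log: a full reload, not a resume.
                self.load_pool_schedules(pool, start_after="")

        def load_pool_schedules(self, pool, start_after):
            pass  # read schedule objects stored in the pool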
Dec 09 16:36:28 compute-0 ceph-mon[75222]: pgmap v1130: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:29 compute-0 podman[256507]: 2025-12-09 16:36:29.631461313 +0000 UTC m=+0.071099648 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
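[annotation] The health_status records are podman's periodic healthcheck runs for the EDPM-managed containers. The contract is visible in config_data: a per-service healthchecks directory is bind-mounted at /openstack and the check command is /openstack/healthcheck. Expressed as generic podman flags (an equivalent sketch, not the literal edpm_ansible invocation):

    # Rough podman equivalent of the multipathd healthcheck stanza above.
    cmd = [
        "podman", "run", "--name", "multipathd",
        "--net", "host", "--privileged", "--restart", "always",
        "--volume", "/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z",
        "--health-cmd", "/openstack/healthcheck",
        "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:"
        "df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f",
    ]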
Dec 09 16:36:30 compute-0 ceph-mon[75222]: pgmap v1131: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:36:32 compute-0 ceph-mon[75222]: pgmap v1132: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:34 compute-0 ceph-mon[75222]: pgmap v1133: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:36:36 compute-0 ceph-mon[75222]: pgmap v1134: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.427166042984902e-07 of space, bias 1.0, pg target 0.00025281498128954704 quantized to 32 (current 32)
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7749663169956723e-06 of space, bias 4.0, pg target 0.0021299595803948067 quantized to 16 (current 16)
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:36:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
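[annotation] The pg_autoscaler lines above are reproducible arithmetic: pg target = space ratio x bias x (target PGs per OSD x OSD count), then quantized to a power of two, keeping the current count when the target is far below it. The multiplier here works out to 300, consistent with the default of 100 target PGs per OSD and the 3 OSDs this cluster appears to have (ceph_vg0..2, 60 GiB total). A worked sketch under those assumptions:

    import math

    def pg_target(space_ratio, bias, target_pg_per_osd=100, num_osds=3):
        # Reproduces the log arithmetic: ratio * bias * (100 * 3).
        return space_ratio * bias * target_pg_per_osd * num_osds

    def quantize(target, current):
        # Round to a power of two; targets this far below 1 keep the
        # pool at its current count ("quantized to 32 (current 32)").
        if target < 1:
            return current
        return 2 ** round(math.log2(target))

    # '.mgr': 7.185749983720779e-06 * 1.0 * 300 -> ~0.0021557 (log agrees)
    print(pg_target(7.185749983720779e-06, 1.0))
    # 'cephfs.cephfs.meta': bias 4.0 -> ~0.0021300, quantized to current 16
    print(quantize(pg_target(1.7749663169956723e-06, 4.0), current=16))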
Dec 09 16:36:38 compute-0 ceph-mon[75222]: pgmap v1135: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:40 compute-0 ceph-mon[75222]: pgmap v1136: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:36:42 compute-0 ceph-mon[75222]: pgmap v1137: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:42 compute-0 podman[256528]: 2025-12-09 16:36:42.615555993 +0000 UTC m=+0.052112521 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 09 16:36:42 compute-0 podman[256527]: 2025-12-09 16:36:42.692082873 +0000 UTC m=+0.133534210 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:36:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:44 compute-0 ceph-mon[75222]: pgmap v1138: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:36:46 compute-0 ceph-mon[75222]: pgmap v1139: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:48 compute-0 ceph-mon[75222]: pgmap v1140: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:50 compute-0 ceph-mon[75222]: pgmap v1141: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:36:52 compute-0 ceph-mon[75222]: pgmap v1142: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:54 compute-0 ceph-mon[75222]: pgmap v1143: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:36:56 compute-0 ceph-mon[75222]: pgmap v1144: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:36:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:36:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:36:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:36:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:36:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:36:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:58 compute-0 ceph-mon[75222]: pgmap v1145: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:36:59 compute-0 sshd-session[256572]: Accepted publickey for zuul from 192.168.122.30 port 59966 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:36:59 compute-0 systemd-logind[786]: New session 53 of user zuul.
Dec 09 16:36:59 compute-0 systemd[1]: Started Session 53 of User zuul.
Dec 09 16:36:59 compute-0 sshd-session[256572]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:37:00 compute-0 nova_compute[243452]: 2025-12-09 16:37:00.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:37:00 compute-0 sudo[256645]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain iscsid.service
Dec 09 16:37:00 compute-0 sudo[256645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:00 compute-0 sudo[256645]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:00 compute-0 ceph-mon[75222]: pgmap v1146: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:00 compute-0 podman[256669]: 2025-12-09 16:37:00.401259431 +0000 UTC m=+0.095620248 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 09 16:37:00 compute-0 sudo[256683]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_nova_compute.service
Dec 09 16:37:00 compute-0 sudo[256683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:00 compute-0 sudo[256683]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:00 compute-0 nova_compute[243452]: 2025-12-09 16:37:00.432 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:37:00 compute-0 nova_compute[243452]: 2025-12-09 16:37:00.432 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:37:00 compute-0 nova_compute[243452]: 2025-12-09 16:37:00.433 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:37:00 compute-0 nova_compute[243452]: 2025-12-09 16:37:00.433 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:37:00 compute-0 nova_compute[243452]: 2025-12-09 16:37:00.434 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:37:00 compute-0 sudo[256718]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_controller.service
Dec 09 16:37:00 compute-0 sudo[256718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:00 compute-0 sudo[256718]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:00 compute-0 sudo[256762]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_metadata_agent.service
Dec 09 16:37:00 compute-0 sudo[256762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:00 compute-0 sudo[256762]: pam_unix(sudo:session): session closed for user root
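[annotation] The four zuul sudo triples above are a CI job verifying that the EDPM services exist, one `systemctl list-units` per unit. The loop is effectively the following; the unit list is taken from the log:

    import subprocess

    UNITS = ["iscsid.service", "edpm_nova_compute.service",
             "edpm_ovn_controller.service", "edpm_ovn_metadata_agent.service"]

    for unit in UNITS:
        out = subprocess.run(
            ["sudo", "systemctl", "list-units", "-a",
             "--no-pager", "--plain", unit],
            check=True, capture_output=True, text=True).stdout
        print(out)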
Dec 09 16:37:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:37:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:37:00 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256210969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:37:01 compute-0 nova_compute[243452]: 2025-12-09 16:37:01.015 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
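[annotation] With RBD-backed storage, nova sizes the hypervisor's disk inventory from the Ceph cluster rather than the local filesystem, hence the `ceph df --format=json --id openstack` subprocess during update_available_resource (0.581 s here, with the matching mon_command dispatch in the ceph-mon lines). A sketch of reading the same numbers, assuming the usual `ceph df` JSON layout with a top-level "stats" object:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(raw)["stats"]

    GiB = 1024 ** 3
    # ~60 GiB avail per the pgmap lines in this log
    print("total", stats["total_bytes"] / GiB,
          "avail", stats["total_avail_bytes"] / GiB)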
Dec 09 16:37:01 compute-0 nova_compute[243452]: 2025-12-09 16:37:01.198 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:37:01 compute-0 nova_compute[243452]: 2025-12-09 16:37:01.200 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5124MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:37:01 compute-0 nova_compute[243452]: 2025-12-09 16:37:01.200 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:37:01 compute-0 nova_compute[243452]: 2025-12-09 16:37:01.200 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:37:01 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3256210969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:37:02 compute-0 nova_compute[243452]: 2025-12-09 16:37:02.100 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:37:02 compute-0 nova_compute[243452]: 2025-12-09 16:37:02.100 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:37:02 compute-0 nova_compute[243452]: 2025-12-09 16:37:02.134 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:37:02 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:37:02.211 155091 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:96:e5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:69:89:7d:fc:2e'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 16:37:02 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:37:02.211 155091 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
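[annotation] The metadata agent's SB_Global watcher is an ovsdbapp row event: it matched an update where nb_cfg moved from 3 to 4, then deliberately delays its chassis heartbeat by 10 seconds to debounce bursts of southbound updates. The class pattern, simplified from the neutron agent; the constructor arguments mirror the matched-event repr in the log, and delayed_chassis_update is a hypothetical stand-in for the agent's actual callback:

    from ovsdbapp.backend.ovs_idl import event

    class SbGlobalUpdateEvent(event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            # (events, table, conditions) as printed in the log repr.
            super().__init__((self.ROW_UPDATE,), "SB_Global", None)

        def run(self, event, row, old):
            # Coalesce rapid nb_cfg bumps instead of updating immediately.
            self.agent.delayed_chassis_update(delay=10)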
Dec 09 16:37:02 compute-0 ceph-mon[75222]: pgmap v1147: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:02 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:37:02 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3809199407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:37:02 compute-0 nova_compute[243452]: 2025-12-09 16:37:02.699 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:37:02 compute-0 nova_compute[243452]: 2025-12-09 16:37:02.706 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:37:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:02 compute-0 nova_compute[243452]: 2025-12-09 16:37:02.729 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:37:02 compute-0 nova_compute[243452]: 2025-12-09 16:37:02.731 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:37:02 compute-0 nova_compute[243452]: 2025-12-09 16:37:02.731 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:37:03 compute-0 nova_compute[243452]: 2025-12-09 16:37:03.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:37:03 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3809199407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:37:04 compute-0 nova_compute[243452]: 2025-12-09 16:37:04.125 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:37:04 compute-0 ceph-mon[75222]: pgmap v1148: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:05 compute-0 nova_compute[243452]: 2025-12-09 16:37:05.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:37:05 compute-0 nova_compute[243452]: 2025-12-09 16:37:05.056 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:37:05 compute-0 nova_compute[243452]: 2025-12-09 16:37:05.056 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:37:05 compute-0 nova_compute[243452]: 2025-12-09 16:37:05.072 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:37:05 compute-0 nova_compute[243452]: 2025-12-09 16:37:05.072 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:37:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:37:06 compute-0 sshd-session[256811]: Accepted publickey for zuul from 192.168.122.30 port 44956 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:37:06 compute-0 systemd-logind[786]: New session 54 of user zuul.
Dec 09 16:37:06 compute-0 systemd[1]: Started Session 54 of User zuul.
Dec 09 16:37:06 compute-0 sshd-session[256811]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:37:06 compute-0 ceph-mon[75222]: pgmap v1149: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:06 compute-0 sudo[256886]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Dec 09 16:37:06 compute-0 sudo[256886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:06 compute-0 sudo[256886]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:06 compute-0 sudo[256912]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Dec 09 16:37:06 compute-0 sudo[256912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:06 compute-0 groupadd[256914]: group added to /etc/group: name=podman, GID=42479
Dec 09 16:37:06 compute-0 groupadd[256914]: group added to /etc/gshadow: name=podman
Dec 09 16:37:06 compute-0 groupadd[256914]: new group: name=podman, GID=42479
Dec 09 16:37:06 compute-0 sudo[256912]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:06 compute-0 sudo[256920]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Dec 09 16:37:06 compute-0 sudo[256920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:06 compute-0 usermod[256922]: add 'zuul' to group 'podman'
Dec 09 16:37:06 compute-0 usermod[256922]: add 'zuul' to shadow group 'podman'
Dec 09 16:37:06 compute-0 sudo[256920]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:06 compute-0 sshd-session[256856]: Invalid user test from 146.190.31.45 port 36306
Dec 09 16:37:06 compute-0 sudo[256929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Dec 09 16:37:06 compute-0 sudo[256929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:06 compute-0 sudo[256929]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:06 compute-0 sudo[256932]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Dec 09 16:37:06 compute-0 sudo[256932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:06 compute-0 sudo[256932]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:06 compute-0 sudo[256935]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Dec 09 16:37:06 compute-0 sudo[256935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:06 compute-0 sudo[256935]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:06 compute-0 sudo[256938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Dec 09 16:37:06 compute-0 sshd-session[256856]: Connection closed by invalid user test 146.190.31.45 port 36306 [preauth]
Dec 09 16:37:06 compute-0 sudo[256938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:06 compute-0 sudo[256938]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:07 compute-0 sudo[256941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Dec 09 16:37:07 compute-0 sudo[256941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:07 compute-0 nova_compute[243452]: 2025-12-09 16:37:07.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:37:07 compute-0 nova_compute[243452]: 2025-12-09 16:37:07.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:37:07 compute-0 nova_compute[243452]: 2025-12-09 16:37:07.056 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:37:07 compute-0 nova_compute[243452]: 2025-12-09 16:37:07.056 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:37:07 compute-0 nova_compute[243452]: 2025-12-09 16:37:07.056 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 09 16:37:07 compute-0 nova_compute[243452]: 2025-12-09 16:37:07.073 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 09 16:37:07 compute-0 sudo[256941]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:07 compute-0 sudo[256944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Dec 09 16:37:07 compute-0 sudo[256944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:07 compute-0 systemd[1]: Reloading.
Dec 09 16:37:07 compute-0 systemd-rc-local-generator[256974]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:37:07 compute-0 systemd-sysv-generator[256978]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:37:07 compute-0 sudo[256944]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:07 compute-0 sudo[256981]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Dec 09 16:37:07 compute-0 sudo[256981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:07 compute-0 sudo[256981]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:07 compute-0 sudo[256984]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Dec 09 16:37:07 compute-0 sudo[256984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:07 compute-0 systemd[1]: Reloading.
Dec 09 16:37:07 compute-0 systemd-sysv-generator[257019]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 16:37:07 compute-0 systemd-rc-local-generator[257015]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 16:37:08 compute-0 systemd[1]: Starting Podman API Socket...
Dec 09 16:37:08 compute-0 systemd[1]: Listening on Podman API Socket.
Dec 09 16:37:08 compute-0 sudo[256984]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:08 compute-0 sudo[257023]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Dec 09 16:37:08 compute-0 sudo[257023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:08 compute-0 sudo[257023]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:08 compute-0 sudo[257026]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Dec 09 16:37:08 compute-0 sudo[257026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:08 compute-0 sudo[257026]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:08 compute-0 sudo[257029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Dec 09 16:37:08 compute-0 sudo[257029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:08 compute-0 sudo[257029]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:08 compute-0 sudo[257032]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Dec 09 16:37:08 compute-0 sudo[257032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:08 compute-0 sudo[257032]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:08 compute-0 sudo[257035]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Dec 09 16:37:08 compute-0 sudo[257035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:08 compute-0 sudo[257035]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:08 compute-0 ceph-mon[75222]: pgmap v1150: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:08 compute-0 sudo[257038]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Dec 09 16:37:08 compute-0 dbus-broker-launch[772]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Dec 09 16:37:08 compute-0 sudo[257038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:08 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Dec 09 16:37:08 compute-0 systemd[1]: Closed Podman API Socket.
Dec 09 16:37:08 compute-0 systemd[1]: Stopping Podman API Socket...
Dec 09 16:37:08 compute-0 systemd[1]: Starting Podman API Socket...
Dec 09 16:37:08 compute-0 systemd[1]: Listening on Podman API Socket.
Dec 09 16:37:08 compute-0 sudo[257038]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:08 compute-0 sudo[256889]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Dec 09 16:37:08 compute-0 sudo[256889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:08 compute-0 sudo[256889]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:08 compute-0 sshd-session[257044]: Accepted publickey for zuul from 192.168.122.30 port 44970 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:37:08 compute-0 systemd-logind[786]: New session 55 of user zuul.
Dec 09 16:37:08 compute-0 systemd[1]: Started Session 55 of User zuul.
Dec 09 16:37:08 compute-0 sshd-session[257044]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:37:08 compute-0 systemd[1]: Starting Podman API Service...
Dec 09 16:37:08 compute-0 systemd[1]: Started Podman API Service.
Dec 09 16:37:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:08 compute-0 podman[257048]: time="2025-12-09T16:37:08Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec 09 16:37:08 compute-0 podman[257048]: time="2025-12-09T16:37:08Z" level=info msg="Setting parallel job count to 25"
Dec 09 16:37:08 compute-0 podman[257048]: time="2025-12-09T16:37:08Z" level=info msg="Using sqlite as database backend"
Dec 09 16:37:08 compute-0 podman[257048]: time="2025-12-09T16:37:08Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec 09 16:37:08 compute-0 podman[257048]: time="2025-12-09T16:37:08Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec 09 16:37:08 compute-0 podman[257048]: time="2025-12-09T16:37:08Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec 09 16:37:08 compute-0 podman[257048]: @ - - [09/Dec/2025:16:37:08 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Dec 09 16:37:08 compute-0 podman[257048]: @ - - [09/Dec/2025:16:37:08 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 25040 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Dec 09 16:37:09 compute-0 nova_compute[243452]: 2025-12-09 16:37:09.073 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:37:09 compute-0 ceph-mon[75222]: pgmap v1151: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:10 compute-0 nova_compute[243452]: 2025-12-09 16:37:10.056 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:37:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:37:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1768515780' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:37:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:37:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1768515780' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:37:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/1768515780' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:37:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/1768515780' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:37:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:37:11 compute-0 nova_compute[243452]: 2025-12-09 16:37:11.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:37:11 compute-0 ceph-mon[75222]: pgmap v1152: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:12 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:37:12.214 155091 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=037f0e18-4bfd-4487-a7a8-05ae973391a9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 16:37:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:13 compute-0 podman[257062]: 2025-12-09 16:37:13.624415378 +0000 UTC m=+0.067126835 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 16:37:13 compute-0 sudo[257080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:37:13 compute-0 sudo[257080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:37:13 compute-0 sudo[257080]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:13 compute-0 podman[257061]: 2025-12-09 16:37:13.687704935 +0000 UTC m=+0.129272400 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 09 16:37:13 compute-0 sudo[257127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:37:13 compute-0 sudo[257127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:37:13 compute-0 ceph-mon[75222]: pgmap v1153: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:14 compute-0 sudo[257127]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 09 16:37:14 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 09 16:37:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:37:14 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:37:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:37:14 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:37:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:37:14 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:37:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:37:14 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:37:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:37:14 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:37:14 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:37:14 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:37:14 compute-0 sudo[257184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:37:14 compute-0 sudo[257184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:37:14 compute-0 sudo[257184]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:14 compute-0 sudo[257209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:37:14 compute-0 sudo[257209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:37:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 09 16:37:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:37:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:37:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:37:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:37:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:37:14 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:37:14 compute-0 podman[257247]: 2025-12-09 16:37:14.92127105 +0000 UTC m=+0.055765635 container create 09a119547a60cf9ad3766cc9566c3553ecd0cec3ef822f82456c421e519ce60a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 09 16:37:14 compute-0 systemd[1]: Started libpod-conmon-09a119547a60cf9ad3766cc9566c3553ecd0cec3ef822f82456c421e519ce60a.scope.
Dec 09 16:37:14 compute-0 podman[257247]: 2025-12-09 16:37:14.893867346 +0000 UTC m=+0.028362001 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:37:15 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:37:15 compute-0 podman[257247]: 2025-12-09 16:37:15.033204489 +0000 UTC m=+0.167699094 container init 09a119547a60cf9ad3766cc9566c3553ecd0cec3ef822f82456c421e519ce60a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_kirch, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:37:15 compute-0 podman[257247]: 2025-12-09 16:37:15.042948654 +0000 UTC m=+0.177443209 container start 09a119547a60cf9ad3766cc9566c3553ecd0cec3ef822f82456c421e519ce60a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 09 16:37:15 compute-0 podman[257247]: 2025-12-09 16:37:15.046662809 +0000 UTC m=+0.181157364 container attach 09a119547a60cf9ad3766cc9566c3553ecd0cec3ef822f82456c421e519ce60a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_kirch, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec 09 16:37:15 compute-0 competent_kirch[257263]: 167 167
Dec 09 16:37:15 compute-0 systemd[1]: libpod-09a119547a60cf9ad3766cc9566c3553ecd0cec3ef822f82456c421e519ce60a.scope: Deactivated successfully.
Dec 09 16:37:15 compute-0 conmon[257263]: conmon 09a119547a60cf9ad376 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-09a119547a60cf9ad3766cc9566c3553ecd0cec3ef822f82456c421e519ce60a.scope/container/memory.events
Dec 09 16:37:15 compute-0 podman[257247]: 2025-12-09 16:37:15.053882643 +0000 UTC m=+0.188377208 container died 09a119547a60cf9ad3766cc9566c3553ecd0cec3ef822f82456c421e519ce60a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:37:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-099eccbc34cde50ee00ec99f94fb29ae522c11081421f1b5b51f2fdb5a925c94-merged.mount: Deactivated successfully.
Dec 09 16:37:15 compute-0 podman[257247]: 2025-12-09 16:37:15.098566274 +0000 UTC m=+0.233060839 container remove 09a119547a60cf9ad3766cc9566c3553ecd0cec3ef822f82456c421e519ce60a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_kirch, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:37:15 compute-0 systemd[1]: libpod-conmon-09a119547a60cf9ad3766cc9566c3553ecd0cec3ef822f82456c421e519ce60a.scope: Deactivated successfully.
Dec 09 16:37:15 compute-0 podman[257287]: 2025-12-09 16:37:15.276118615 +0000 UTC m=+0.044223679 container create 7a84043194d1fc1157b2921b7af0c06772bb8e48488767a6b06dd671edb95296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_solomon, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:37:15 compute-0 systemd[1]: Started libpod-conmon-7a84043194d1fc1157b2921b7af0c06772bb8e48488767a6b06dd671edb95296.scope.
Dec 09 16:37:15 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249e426b4769c36b98bcc77185641db19765cd064ebafdb40b78f44d6d0cc801/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:15 compute-0 podman[257287]: 2025-12-09 16:37:15.256673436 +0000 UTC m=+0.024778510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249e426b4769c36b98bcc77185641db19765cd064ebafdb40b78f44d6d0cc801/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249e426b4769c36b98bcc77185641db19765cd064ebafdb40b78f44d6d0cc801/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249e426b4769c36b98bcc77185641db19765cd064ebafdb40b78f44d6d0cc801/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249e426b4769c36b98bcc77185641db19765cd064ebafdb40b78f44d6d0cc801/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:15 compute-0 podman[257287]: 2025-12-09 16:37:15.36629596 +0000 UTC m=+0.134401014 container init 7a84043194d1fc1157b2921b7af0c06772bb8e48488767a6b06dd671edb95296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_solomon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 16:37:15 compute-0 podman[257287]: 2025-12-09 16:37:15.372769173 +0000 UTC m=+0.140874217 container start 7a84043194d1fc1157b2921b7af0c06772bb8e48488767a6b06dd671edb95296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:37:15 compute-0 podman[257287]: 2025-12-09 16:37:15.376273142 +0000 UTC m=+0.144378176 container attach 7a84043194d1fc1157b2921b7af0c06772bb8e48488767a6b06dd671edb95296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:37:15 compute-0 ceph-mon[75222]: pgmap v1154: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:15 compute-0 goofy_solomon[257303]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:37:15 compute-0 goofy_solomon[257303]: --> All data devices are unavailable
Dec 09 16:37:15 compute-0 systemd[1]: libpod-7a84043194d1fc1157b2921b7af0c06772bb8e48488767a6b06dd671edb95296.scope: Deactivated successfully.
Dec 09 16:37:15 compute-0 podman[257287]: 2025-12-09 16:37:15.907779973 +0000 UTC m=+0.675885057 container died 7a84043194d1fc1157b2921b7af0c06772bb8e48488767a6b06dd671edb95296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_solomon, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Dec 09 16:37:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-249e426b4769c36b98bcc77185641db19765cd064ebafdb40b78f44d6d0cc801-merged.mount: Deactivated successfully.
Dec 09 16:37:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:37:15 compute-0 podman[257287]: 2025-12-09 16:37:15.95551679 +0000 UTC m=+0.723621834 container remove 7a84043194d1fc1157b2921b7af0c06772bb8e48488767a6b06dd671edb95296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_solomon, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:37:15 compute-0 systemd[1]: libpod-conmon-7a84043194d1fc1157b2921b7af0c06772bb8e48488767a6b06dd671edb95296.scope: Deactivated successfully.
Dec 09 16:37:16 compute-0 sudo[257209]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:16 compute-0 sudo[257333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:37:16 compute-0 sudo[257333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:37:16 compute-0 sudo[257333]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:16 compute-0 sudo[257358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:37:16 compute-0 sudo[257358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:37:16 compute-0 podman[257396]: 2025-12-09 16:37:16.473297484 +0000 UTC m=+0.042376817 container create 836cc48bbfe0a17317b4af13a8e02f0003fcdedc8b21dd01556dc7f9ddfbc391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:37:16 compute-0 systemd[1]: Started libpod-conmon-836cc48bbfe0a17317b4af13a8e02f0003fcdedc8b21dd01556dc7f9ddfbc391.scope.
Dec 09 16:37:16 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:37:16 compute-0 podman[257396]: 2025-12-09 16:37:16.456367316 +0000 UTC m=+0.025446649 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:37:16 compute-0 podman[257396]: 2025-12-09 16:37:16.557930893 +0000 UTC m=+0.127010276 container init 836cc48bbfe0a17317b4af13a8e02f0003fcdedc8b21dd01556dc7f9ddfbc391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 09 16:37:16 compute-0 podman[257396]: 2025-12-09 16:37:16.566291899 +0000 UTC m=+0.135371242 container start 836cc48bbfe0a17317b4af13a8e02f0003fcdedc8b21dd01556dc7f9ddfbc391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hawking, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:37:16 compute-0 vigorous_hawking[257413]: 167 167
Dec 09 16:37:16 compute-0 podman[257396]: 2025-12-09 16:37:16.571241899 +0000 UTC m=+0.140321232 container attach 836cc48bbfe0a17317b4af13a8e02f0003fcdedc8b21dd01556dc7f9ddfbc391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hawking, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:37:16 compute-0 systemd[1]: libpod-836cc48bbfe0a17317b4af13a8e02f0003fcdedc8b21dd01556dc7f9ddfbc391.scope: Deactivated successfully.
Dec 09 16:37:16 compute-0 conmon[257413]: conmon 836cc48bbfe0a17317b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-836cc48bbfe0a17317b4af13a8e02f0003fcdedc8b21dd01556dc7f9ddfbc391.scope/container/memory.events
Dec 09 16:37:16 compute-0 podman[257396]: 2025-12-09 16:37:16.573479362 +0000 UTC m=+0.142558705 container died 836cc48bbfe0a17317b4af13a8e02f0003fcdedc8b21dd01556dc7f9ddfbc391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 09 16:37:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea1b3306092f5a2dabee770f66bd16febdfc01d649914764d8b8f57ad19ad11a-merged.mount: Deactivated successfully.
Dec 09 16:37:16 compute-0 podman[257396]: 2025-12-09 16:37:16.614644894 +0000 UTC m=+0.183724247 container remove 836cc48bbfe0a17317b4af13a8e02f0003fcdedc8b21dd01556dc7f9ddfbc391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hawking, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 16:37:16 compute-0 systemd[1]: libpod-conmon-836cc48bbfe0a17317b4af13a8e02f0003fcdedc8b21dd01556dc7f9ddfbc391.scope: Deactivated successfully.
Dec 09 16:37:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:16 compute-0 podman[257438]: 2025-12-09 16:37:16.836445624 +0000 UTC m=+0.046936426 container create 096f53e1f79046f57796c09f45f897cf3accff20d623247201c2ea27f40c11ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_matsumoto, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:37:16 compute-0 systemd[1]: Started libpod-conmon-096f53e1f79046f57796c09f45f897cf3accff20d623247201c2ea27f40c11ef.scope.
Dec 09 16:37:16 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:37:16 compute-0 podman[257438]: 2025-12-09 16:37:16.817316584 +0000 UTC m=+0.027807396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66422cc9726d9c0e8fb4db9d06636e4805c55cb4c076362ba00d983a335ef39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66422cc9726d9c0e8fb4db9d06636e4805c55cb4c076362ba00d983a335ef39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66422cc9726d9c0e8fb4db9d06636e4805c55cb4c076362ba00d983a335ef39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66422cc9726d9c0e8fb4db9d06636e4805c55cb4c076362ba00d983a335ef39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:16 compute-0 podman[257438]: 2025-12-09 16:37:16.935941722 +0000 UTC m=+0.146432634 container init 096f53e1f79046f57796c09f45f897cf3accff20d623247201c2ea27f40c11ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:37:16 compute-0 podman[257438]: 2025-12-09 16:37:16.943329461 +0000 UTC m=+0.153820273 container start 096f53e1f79046f57796c09f45f897cf3accff20d623247201c2ea27f40c11ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_matsumoto, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:37:16 compute-0 podman[257438]: 2025-12-09 16:37:16.946587432 +0000 UTC m=+0.157078244 container attach 096f53e1f79046f57796c09f45f897cf3accff20d623247201c2ea27f40c11ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_matsumoto, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]: {
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:     "0": [
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:         {
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "devices": [
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "/dev/loop3"
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             ],
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_name": "ceph_lv0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_size": "21470642176",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "name": "ceph_lv0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "tags": {
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.cluster_name": "ceph",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.crush_device_class": "",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.encrypted": "0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.objectstore": "bluestore",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.osd_id": "0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.type": "block",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.vdo": "0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.with_tpm": "0"
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             },
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "type": "block",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "vg_name": "ceph_vg0"
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:         }
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:     ],
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:     "1": [
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:         {
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "devices": [
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "/dev/loop4"
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             ],
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_name": "ceph_lv1",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_size": "21470642176",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "name": "ceph_lv1",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "tags": {
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.cluster_name": "ceph",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.crush_device_class": "",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.encrypted": "0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.objectstore": "bluestore",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.osd_id": "1",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.type": "block",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.vdo": "0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.with_tpm": "0"
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             },
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "type": "block",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "vg_name": "ceph_vg1"
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:         }
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:     ],
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:     "2": [
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:         {
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "devices": [
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "/dev/loop5"
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             ],
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_name": "ceph_lv2",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_size": "21470642176",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "name": "ceph_lv2",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "tags": {
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.cluster_name": "ceph",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.crush_device_class": "",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.encrypted": "0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.objectstore": "bluestore",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.osd_id": "2",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.type": "block",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.vdo": "0",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:                 "ceph.with_tpm": "0"
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             },
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "type": "block",
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:             "vg_name": "ceph_vg2"
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:         }
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]:     ]
Dec 09 16:37:17 compute-0 condescending_matsumoto[257454]: }
Dec 09 16:37:17 compute-0 systemd[1]: libpod-096f53e1f79046f57796c09f45f897cf3accff20d623247201c2ea27f40c11ef.scope: Deactivated successfully.
Dec 09 16:37:17 compute-0 podman[257463]: 2025-12-09 16:37:17.287635378 +0000 UTC m=+0.028208397 container died 096f53e1f79046f57796c09f45f897cf3accff20d623247201c2ea27f40c11ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a66422cc9726d9c0e8fb4db9d06636e4805c55cb4c076362ba00d983a335ef39-merged.mount: Deactivated successfully.
Dec 09 16:37:17 compute-0 podman[257463]: 2025-12-09 16:37:17.345784099 +0000 UTC m=+0.086357078 container remove 096f53e1f79046f57796c09f45f897cf3accff20d623247201c2ea27f40c11ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:37:17 compute-0 systemd[1]: libpod-conmon-096f53e1f79046f57796c09f45f897cf3accff20d623247201c2ea27f40c11ef.scope: Deactivated successfully.
Dec 09 16:37:17 compute-0 sudo[257358]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:17 compute-0 sudo[257478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:37:17 compute-0 sudo[257478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:37:17 compute-0 sudo[257478]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:17 compute-0 sudo[257503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:37:17 compute-0 sudo[257503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:37:17 compute-0 podman[257540]: 2025-12-09 16:37:17.855887017 +0000 UTC m=+0.062016982 container create c2edc540b4b2c630d4640e3184a6f9af137ea6b65900036147dc6ee988cdab27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 09 16:37:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:37:17.858 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:37:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:37:17.860 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:37:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:37:17.860 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:37:17 compute-0 systemd[1]: Started libpod-conmon-c2edc540b4b2c630d4640e3184a6f9af137ea6b65900036147dc6ee988cdab27.scope.
Dec 09 16:37:17 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:37:17 compute-0 podman[257540]: 2025-12-09 16:37:17.838591778 +0000 UTC m=+0.044721763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:37:17 compute-0 podman[257540]: 2025-12-09 16:37:17.941839223 +0000 UTC m=+0.147969208 container init c2edc540b4b2c630d4640e3184a6f9af137ea6b65900036147dc6ee988cdab27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 16:37:17 compute-0 podman[257540]: 2025-12-09 16:37:17.953676977 +0000 UTC m=+0.159806982 container start c2edc540b4b2c630d4640e3184a6f9af137ea6b65900036147dc6ee988cdab27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 09 16:37:17 compute-0 podman[257540]: 2025-12-09 16:37:17.957233017 +0000 UTC m=+0.163363042 container attach c2edc540b4b2c630d4640e3184a6f9af137ea6b65900036147dc6ee988cdab27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:37:17 compute-0 trusting_pare[257557]: 167 167
Dec 09 16:37:17 compute-0 systemd[1]: libpod-c2edc540b4b2c630d4640e3184a6f9af137ea6b65900036147dc6ee988cdab27.scope: Deactivated successfully.
Dec 09 16:37:17 compute-0 podman[257540]: 2025-12-09 16:37:17.959439549 +0000 UTC m=+0.165569534 container died c2edc540b4b2c630d4640e3184a6f9af137ea6b65900036147dc6ee988cdab27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 09 16:37:17 compute-0 ceph-mon[75222]: pgmap v1155: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5420cd58382505a5249897915cf04eafddf5470817cc013146066b0b1851ce0-merged.mount: Deactivated successfully.
Dec 09 16:37:17 compute-0 podman[257540]: 2025-12-09 16:37:17.99560646 +0000 UTC m=+0.201736425 container remove c2edc540b4b2c630d4640e3184a6f9af137ea6b65900036147dc6ee988cdab27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:37:18 compute-0 systemd[1]: libpod-conmon-c2edc540b4b2c630d4640e3184a6f9af137ea6b65900036147dc6ee988cdab27.scope: Deactivated successfully.
Dec 09 16:37:18 compute-0 podman[257579]: 2025-12-09 16:37:18.162858821 +0000 UTC m=+0.048832960 container create 67b06e756af3b3c8549d2060378e0b71cc519a4e5d0ceb78b93f6c27d3936cd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:37:18 compute-0 systemd[1]: Started libpod-conmon-67b06e756af3b3c8549d2060378e0b71cc519a4e5d0ceb78b93f6c27d3936cd2.scope.
Dec 09 16:37:18 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a37eb1eaa1ed2789c35edc29c4584e4f8819e0fc1d25ccbed0aaac57d1c734/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a37eb1eaa1ed2789c35edc29c4584e4f8819e0fc1d25ccbed0aaac57d1c734/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a37eb1eaa1ed2789c35edc29c4584e4f8819e0fc1d25ccbed0aaac57d1c734/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:18 compute-0 podman[257579]: 2025-12-09 16:37:18.137183796 +0000 UTC m=+0.023158005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a37eb1eaa1ed2789c35edc29c4584e4f8819e0fc1d25ccbed0aaac57d1c734/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:37:18 compute-0 podman[257579]: 2025-12-09 16:37:18.241611842 +0000 UTC m=+0.127585991 container init 67b06e756af3b3c8549d2060378e0b71cc519a4e5d0ceb78b93f6c27d3936cd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_grothendieck, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:37:18 compute-0 podman[257579]: 2025-12-09 16:37:18.248141857 +0000 UTC m=+0.134115956 container start 67b06e756af3b3c8549d2060378e0b71cc519a4e5d0ceb78b93f6c27d3936cd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_grothendieck, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec 09 16:37:18 compute-0 podman[257579]: 2025-12-09 16:37:18.253085926 +0000 UTC m=+0.139060035 container attach 67b06e756af3b3c8549d2060378e0b71cc519a4e5d0ceb78b93f6c27d3936cd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_grothendieck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 16:37:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:18 compute-0 lvm[257675]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:37:18 compute-0 lvm[257674]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:37:18 compute-0 lvm[257675]: VG ceph_vg0 finished
Dec 09 16:37:18 compute-0 lvm[257674]: VG ceph_vg1 finished
Dec 09 16:37:18 compute-0 lvm[257677]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:37:18 compute-0 lvm[257677]: VG ceph_vg2 finished
Dec 09 16:37:18 compute-0 nifty_grothendieck[257596]: {}
Dec 09 16:37:19 compute-0 systemd[1]: libpod-67b06e756af3b3c8549d2060378e0b71cc519a4e5d0ceb78b93f6c27d3936cd2.scope: Deactivated successfully.
Dec 09 16:37:19 compute-0 systemd[1]: libpod-67b06e756af3b3c8549d2060378e0b71cc519a4e5d0ceb78b93f6c27d3936cd2.scope: Consumed 1.300s CPU time.
Dec 09 16:37:19 compute-0 podman[257579]: 2025-12-09 16:37:19.025584359 +0000 UTC m=+0.911558468 container died 67b06e756af3b3c8549d2060378e0b71cc519a4e5d0ceb78b93f6c27d3936cd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 16:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-35a37eb1eaa1ed2789c35edc29c4584e4f8819e0fc1d25ccbed0aaac57d1c734-merged.mount: Deactivated successfully.
Dec 09 16:37:19 compute-0 podman[257579]: 2025-12-09 16:37:19.073586724 +0000 UTC m=+0.959560823 container remove 67b06e756af3b3c8549d2060378e0b71cc519a4e5d0ceb78b93f6c27d3936cd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 16:37:19 compute-0 systemd[1]: libpod-conmon-67b06e756af3b3c8549d2060378e0b71cc519a4e5d0ceb78b93f6c27d3936cd2.scope: Deactivated successfully.
Dec 09 16:37:19 compute-0 sudo[257503]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:37:19 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:37:19 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:37:19 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:37:19 compute-0 sudo[257692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:37:19 compute-0 sudo[257692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:37:19 compute-0 sudo[257692]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:20 compute-0 nova_compute[243452]: 2025-12-09 16:37:20.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:37:20 compute-0 nova_compute[243452]: 2025-12-09 16:37:20.056 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 09 16:37:20 compute-0 ceph-mon[75222]: pgmap v1156: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:37:20 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:37:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:37:22 compute-0 ceph-mon[75222]: pgmap v1157: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:23 compute-0 podman[257048]: time="2025-12-09T16:37:23Z" level=info msg="Received shutdown.Stop(), terminating!" PID=257048
Dec 09 16:37:23 compute-0 systemd[1]: podman.service: Deactivated successfully.
Dec 09 16:37:24 compute-0 ceph-mon[75222]: pgmap v1158: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:37:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:37:25
Dec 09 16:37:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:37:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:37:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.control', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'images', 'volumes', 'cephfs.cephfs.data']
Dec 09 16:37:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:37:26 compute-0 ceph-mon[75222]: pgmap v1159: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:37:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:37:28 compute-0 ceph-mon[75222]: pgmap v1160: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:30 compute-0 ceph-mon[75222]: pgmap v1161: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:30 compute-0 podman[257717]: 2025-12-09 16:37:30.628445549 +0000 UTC m=+0.066572409 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 09 16:37:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:30 compute-0 sudo[257738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Dec 09 16:37:30 compute-0 sudo[257738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:30 compute-0 sudo[257738]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:37:30 compute-0 sudo[257763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Dec 09 16:37:30 compute-0 sudo[257763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:37:30 compute-0 sudo[257763]: pam_unix(sudo:session): session closed for user root
Dec 09 16:37:32 compute-0 ceph-mon[75222]: pgmap v1162: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:34 compute-0 ceph-mon[75222]: pgmap v1163: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:35 compute-0 sshd-session[256814]: Connection closed by 192.168.122.30 port 44956
Dec 09 16:37:35 compute-0 sshd-session[256811]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:37:35 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Dec 09 16:37:35 compute-0 systemd[1]: session-54.scope: Consumed 1.330s CPU time.
Dec 09 16:37:35 compute-0 systemd-logind[786]: Session 54 logged out. Waiting for processes to exit.
Dec 09 16:37:35 compute-0 systemd-logind[786]: Removed session 54.
Dec 09 16:37:35 compute-0 sshd-session[256575]: Connection closed by 192.168.122.30 port 59966
Dec 09 16:37:35 compute-0 sshd-session[256572]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:37:35 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Dec 09 16:37:35 compute-0 systemd-logind[786]: Session 53 logged out. Waiting for processes to exit.
Dec 09 16:37:35 compute-0 systemd-logind[786]: Removed session 53.
Dec 09 16:37:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:37:36 compute-0 ceph-mon[75222]: pgmap v1164: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:36 compute-0 sshd-session[257047]: Connection closed by 192.168.122.30 port 44970
Dec 09 16:37:36 compute-0 sshd-session[257044]: pam_unix(sshd:session): session closed for user zuul
Dec 09 16:37:36 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Dec 09 16:37:36 compute-0 systemd-logind[786]: Session 55 logged out. Waiting for processes to exit.
Dec 09 16:37:36 compute-0 systemd-logind[786]: Removed session 55.
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.427166042984902e-07 of space, bias 1.0, pg target 0.00025281498128954704 quantized to 32 (current 32)
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7749663169956723e-06 of space, bias 4.0, pg target 0.0021299595803948067 quantized to 16 (current 16)
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:37:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:37:38 compute-0 ceph-mon[75222]: pgmap v1165: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:39 compute-0 ceph-mon[75222]: pgmap v1166: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:37:41 compute-0 ceph-mon[75222]: pgmap v1167: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:43 compute-0 ceph-mon[75222]: pgmap v1168: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:44 compute-0 podman[257789]: 2025-12-09 16:37:44.621236864 +0000 UTC m=+0.051307489 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 16:37:44 compute-0 podman[257788]: 2025-12-09 16:37:44.655858211 +0000 UTC m=+0.089238569 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 09 16:37:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:45 compute-0 ceph-mon[75222]: pgmap v1169: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:37:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:47 compute-0 ceph-mon[75222]: pgmap v1170: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:49 compute-0 sshd-session[257835]: Invalid user test from 146.190.31.45 port 40034
Dec 09 16:37:49 compute-0 sshd-session[257835]: Connection closed by invalid user test 146.190.31.45 port 40034 [preauth]
Dec 09 16:37:49 compute-0 ceph-mon[75222]: pgmap v1171: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:37:51 compute-0 ceph-mon[75222]: pgmap v1172: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:53 compute-0 ceph-mon[75222]: pgmap v1173: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:55 compute-0 ceph-mon[75222]: pgmap v1174: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:55.964907) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298275964962, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1522, "num_deletes": 505, "total_data_size": 1947859, "memory_usage": 1985920, "flush_reason": "Manual Compaction"}
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298275973822, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1410565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23087, "largest_seqno": 24608, "table_properties": {"data_size": 1404880, "index_size": 2441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16215, "raw_average_key_size": 18, "raw_value_size": 1390953, "raw_average_value_size": 1628, "num_data_blocks": 111, "num_entries": 854, "num_filter_entries": 854, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765298148, "oldest_key_time": 1765298148, "file_creation_time": 1765298275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 8950 microseconds, and 3892 cpu microseconds.
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:55.973861) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1410565 bytes OK
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:55.973877) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:55.975385) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:55.975425) EVENT_LOG_v1 {"time_micros": 1765298275975417, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:55.975447) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1940132, prev total WAL file size 1940132, number of live WAL files 2.
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:55.976203) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1377KB)], [53(9095KB)]
Dec 09 16:37:55 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298275976247, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10724821, "oldest_snapshot_seqno": -1}
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4645 keys, 7796280 bytes, temperature: kUnknown
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298276027262, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7796280, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7764647, "index_size": 18915, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 115833, "raw_average_key_size": 24, "raw_value_size": 7680001, "raw_average_value_size": 1653, "num_data_blocks": 787, "num_entries": 4645, "num_filter_entries": 4645, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765298275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:56.027570) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7796280 bytes
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:56.029047) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.8 rd, 152.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.9 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(13.1) write-amplify(5.5) OK, records in: 5630, records dropped: 985 output_compression: NoCompression
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:56.029076) EVENT_LOG_v1 {"time_micros": 1765298276029062, "job": 28, "event": "compaction_finished", "compaction_time_micros": 51108, "compaction_time_cpu_micros": 16450, "output_level": 6, "num_output_files": 1, "total_output_size": 7796280, "num_input_records": 5630, "num_output_records": 4645, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298276029647, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298276032651, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:55.976101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:56.032782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:56.032789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:56.032792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:56.032795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:37:56 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:37:56.032798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:37:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:37:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:37:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:37:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:37:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:37:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:37:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:57 compute-0 ceph-mon[75222]: pgmap v1175: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:37:59 compute-0 ceph-mon[75222]: pgmap v1176: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:38:01 compute-0 podman[257837]: 2025-12-09 16:38:01.627999678 +0000 UTC m=+0.067487895 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:38:01 compute-0 ceph-mon[75222]: pgmap v1177: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:02 compute-0 nova_compute[243452]: 2025-12-09 16:38:02.183 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:38:02 compute-0 nova_compute[243452]: 2025-12-09 16:38:02.504 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:38:02 compute-0 nova_compute[243452]: 2025-12-09 16:38:02.504 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:38:02 compute-0 nova_compute[243452]: 2025-12-09 16:38:02.504 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:38:02 compute-0 nova_compute[243452]: 2025-12-09 16:38:02.505 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:38:02 compute-0 nova_compute[243452]: 2025-12-09 16:38:02.505 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:38:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:03 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:38:03 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2226959078' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.059 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.232 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.233 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5125MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.234 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.234 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.442 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.442 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.518 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Refreshing inventories for resource provider ca130087-db63-46e1-b278-a80bb66e6865 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.594 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Updating ProviderTree inventory for provider ca130087-db63-46e1-b278-a80bb66e6865 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.595 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Updating inventory in ProviderTree for provider ca130087-db63-46e1-b278-a80bb66e6865 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.611 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Refreshing aggregate associations for resource provider ca130087-db63-46e1-b278-a80bb66e6865, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.643 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Refreshing trait associations for resource provider ca130087-db63-46e1-b278-a80bb66e6865, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 09 16:38:03 compute-0 nova_compute[243452]: 2025-12-09 16:38:03.664 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:38:03 compute-0 ceph-mon[75222]: pgmap v1178: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:03 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2226959078' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:38:04 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:38:04 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305005275' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:38:04 compute-0 nova_compute[243452]: 2025-12-09 16:38:04.240 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:38:04 compute-0 nova_compute[243452]: 2025-12-09 16:38:04.249 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:38:04 compute-0 nova_compute[243452]: 2025-12-09 16:38:04.274 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:38:04 compute-0 nova_compute[243452]: 2025-12-09 16:38:04.277 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:38:04 compute-0 nova_compute[243452]: 2025-12-09 16:38:04.277 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:38:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:05 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/305005275' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:38:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:38:06 compute-0 ceph-mon[75222]: pgmap v1179: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:07 compute-0 nova_compute[243452]: 2025-12-09 16:38:07.148 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:38:07 compute-0 nova_compute[243452]: 2025-12-09 16:38:07.149 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:38:07 compute-0 nova_compute[243452]: 2025-12-09 16:38:07.149 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:38:07 compute-0 nova_compute[243452]: 2025-12-09 16:38:07.168 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:38:07 compute-0 nova_compute[243452]: 2025-12-09 16:38:07.168 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:38:07 compute-0 nova_compute[243452]: 2025-12-09 16:38:07.168 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:38:08 compute-0 nova_compute[243452]: 2025-12-09 16:38:08.066 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:38:08 compute-0 ceph-mon[75222]: pgmap v1180: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:09 compute-0 nova_compute[243452]: 2025-12-09 16:38:09.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:38:09 compute-0 nova_compute[243452]: 2025-12-09 16:38:09.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:38:10 compute-0 nova_compute[243452]: 2025-12-09 16:38:10.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:38:10 compute-0 ceph-mon[75222]: pgmap v1181: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:38:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3586294660' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:38:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:38:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3586294660' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:38:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:38:11 compute-0 nova_compute[243452]: 2025-12-09 16:38:11.048 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:38:11 compute-0 nova_compute[243452]: 2025-12-09 16:38:11.068 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:38:11 compute-0 nova_compute[243452]: 2025-12-09 16:38:11.068 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:38:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3586294660' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:38:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3586294660' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:38:12 compute-0 ceph-mon[75222]: pgmap v1182: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:14 compute-0 ceph-mon[75222]: pgmap v1183: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:15 compute-0 podman[257903]: 2025-12-09 16:38:15.622373052 +0000 UTC m=+0.069359479 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 16:38:15 compute-0 podman[257902]: 2025-12-09 16:38:15.634084973 +0000 UTC m=+0.080843043 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 09 16:38:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:38:16 compute-0 ceph-mon[75222]: pgmap v1184: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:38:17.858 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:38:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:38:17.859 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:38:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:38:17.859 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:38:18 compute-0 ceph-mon[75222]: pgmap v1185: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:19 compute-0 sudo[257947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:38:19 compute-0 sudo[257947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:38:19 compute-0 sudo[257947]: pam_unix(sudo:session): session closed for user root
Dec 09 16:38:19 compute-0 sudo[257972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 09 16:38:19 compute-0 sudo[257972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:38:19 compute-0 podman[258040]: 2025-12-09 16:38:19.720223358 +0000 UTC m=+0.069782250 container exec 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:38:19 compute-0 podman[258040]: 2025-12-09 16:38:19.80921017 +0000 UTC m=+0.158769032 container exec_died 9ce3cdfc68db4310535ef64a87efb40353dcdfbbac71cac592072bd903c643f6 (image=quay.io/ceph/ceph:v20, name=ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:38:20 compute-0 ceph-mon[75222]: pgmap v1186: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:20 compute-0 sudo[257972]: pam_unix(sudo:session): session closed for user root
Dec 09 16:38:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:38:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:38:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:38:20 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:38:20 compute-0 sudo[258231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:38:20 compute-0 sudo[258231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:38:20 compute-0 sudo[258231]: pam_unix(sudo:session): session closed for user root
Dec 09 16:38:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:20 compute-0 sudo[258256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:38:20 compute-0 sudo[258256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:38:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:38:21 compute-0 sudo[258256]: pam_unix(sudo:session): session closed for user root
Dec 09 16:38:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:38:21 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:38:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:38:21 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:38:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:38:21 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:38:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:38:21 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:38:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:38:21 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:38:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:38:21 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:38:21 compute-0 sudo[258311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:38:21 compute-0 sudo[258311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:38:21 compute-0 sudo[258311]: pam_unix(sudo:session): session closed for user root
Dec 09 16:38:21 compute-0 sudo[258336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:38:21 compute-0 sudo[258336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:38:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:38:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:38:21 compute-0 ceph-mon[75222]: pgmap v1187: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:38:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:38:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:38:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:38:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:38:21 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:38:21 compute-0 podman[258374]: 2025-12-09 16:38:21.719401203 +0000 UTC m=+0.035710639 container create 4934f44bc40afcc1d16b87d6937340db3df65bc4a576468d2e6d0c4fb64b51cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_engelbart, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:38:21 compute-0 systemd[1]: Started libpod-conmon-4934f44bc40afcc1d16b87d6937340db3df65bc4a576468d2e6d0c4fb64b51cc.scope.
Dec 09 16:38:21 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:38:21 compute-0 podman[258374]: 2025-12-09 16:38:21.791458837 +0000 UTC m=+0.107768293 container init 4934f44bc40afcc1d16b87d6937340db3df65bc4a576468d2e6d0c4fb64b51cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_engelbart, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 16:38:21 compute-0 podman[258374]: 2025-12-09 16:38:21.799054271 +0000 UTC m=+0.115363707 container start 4934f44bc40afcc1d16b87d6937340db3df65bc4a576468d2e6d0c4fb64b51cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 09 16:38:21 compute-0 podman[258374]: 2025-12-09 16:38:21.703866425 +0000 UTC m=+0.020175881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:38:21 compute-0 podman[258374]: 2025-12-09 16:38:21.802220081 +0000 UTC m=+0.118529537 container attach 4934f44bc40afcc1d16b87d6937340db3df65bc4a576468d2e6d0c4fb64b51cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 09 16:38:21 compute-0 xenodochial_engelbart[258391]: 167 167
Dec 09 16:38:21 compute-0 systemd[1]: libpod-4934f44bc40afcc1d16b87d6937340db3df65bc4a576468d2e6d0c4fb64b51cc.scope: Deactivated successfully.
Dec 09 16:38:21 compute-0 podman[258374]: 2025-12-09 16:38:21.804023652 +0000 UTC m=+0.120333118 container died 4934f44bc40afcc1d16b87d6937340db3df65bc4a576468d2e6d0c4fb64b51cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bdd331c9424adb280e7e9b329b0ddb7c49f18c45a3bf6771c17c334cc067222-merged.mount: Deactivated successfully.
Dec 09 16:38:21 compute-0 podman[258374]: 2025-12-09 16:38:21.848305292 +0000 UTC m=+0.164614728 container remove 4934f44bc40afcc1d16b87d6937340db3df65bc4a576468d2e6d0c4fb64b51cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:38:21 compute-0 systemd[1]: libpod-conmon-4934f44bc40afcc1d16b87d6937340db3df65bc4a576468d2e6d0c4fb64b51cc.scope: Deactivated successfully.
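The one-shot container above (xenodochial_engelbart) printed "167 167" and exited within milliseconds. This matches cephadm's uid/gid probe: it runs `stat` inside the Ceph image to learn which uid and gid own /var/lib/ceph, and 167:167 is the `ceph` user and group inside the container. A minimal re-creation of that probe, assuming `podman` on PATH and the image digest from the log (the exact stat invocation cephadm uses is an assumption here):

    # Hypothetical re-run of the uid/gid probe: print the owner uid/gid
    # of /var/lib/ceph inside the Ceph image, as the "167 167" output suggests.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    uid, gid = map(int, out)   # expected: 167 167 (the in-container 'ceph' user)
    print(f"ceph uid={uid} gid={gid}")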
Dec 09 16:38:22 compute-0 podman[258417]: 2025-12-09 16:38:22.020967785 +0000 UTC m=+0.047779480 container create 567f83a8b187761cc0a8c3610297fa3711fc00ce8b2cf7533721907a5655887d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:38:22 compute-0 systemd[1]: Started libpod-conmon-567f83a8b187761cc0a8c3610297fa3711fc00ce8b2cf7533721907a5655887d.scope.
Dec 09 16:38:22 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:38:22 compute-0 podman[258417]: 2025-12-09 16:38:21.999967672 +0000 UTC m=+0.026779377 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3cbcd86863d11f76c29c6d851f3fee2f4359b7512dccbbb8d6a3bee7b33ea6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3cbcd86863d11f76c29c6d851f3fee2f4359b7512dccbbb8d6a3bee7b33ea6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3cbcd86863d11f76c29c6d851f3fee2f4359b7512dccbbb8d6a3bee7b33ea6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3cbcd86863d11f76c29c6d851f3fee2f4359b7512dccbbb8d6a3bee7b33ea6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3cbcd86863d11f76c29c6d851f3fee2f4359b7512dccbbb8d6a3bee7b33ea6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
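The kernel's "supports timestamps until 2038 (0x7fffffff)" lines are informational, not failures: they fire whenever a bind mount lands on an XFS filesystem formatted without the bigtime feature, whose inodes store timestamps as a signed 32-bit epoch value. The hex constant in the message is exactly that ceiling:

    # 0x7fffffff is the signed 32-bit time_t ceiling the kernel message
    # refers to; converting it shows the "year 2038" boundary.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF                       # 2147483647 seconds since the epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00

Whether a given filesystem already has 64-bit timestamps can be checked with `xfs_info <mountpoint>` (look for bigtime=1 in the meta-data section).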
Dec 09 16:38:22 compute-0 podman[258417]: 2025-12-09 16:38:22.11113087 +0000 UTC m=+0.137942555 container init 567f83a8b187761cc0a8c3610297fa3711fc00ce8b2cf7533721907a5655887d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:38:22 compute-0 podman[258417]: 2025-12-09 16:38:22.11893872 +0000 UTC m=+0.145750405 container start 567f83a8b187761cc0a8c3610297fa3711fc00ce8b2cf7533721907a5655887d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:38:22 compute-0 podman[258417]: 2025-12-09 16:38:22.122528981 +0000 UTC m=+0.149340666 container attach 567f83a8b187761cc0a8c3610297fa3711fc00ce8b2cf7533721907a5655887d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Dec 09 16:38:22 compute-0 admiring_burnell[258433]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:38:22 compute-0 admiring_burnell[258433]: --> All data devices are unavailable
Dec 09 16:38:22 compute-0 systemd[1]: libpod-567f83a8b187761cc0a8c3610297fa3711fc00ce8b2cf7533721907a5655887d.scope: Deactivated successfully.
Dec 09 16:38:22 compute-0 podman[258417]: 2025-12-09 16:38:22.650385748 +0000 UTC m=+0.677197433 container died 567f83a8b187761cc0a8c3610297fa3711fc00ce8b2cf7533721907a5655887d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a3cbcd86863d11f76c29c6d851f3fee2f4359b7512dccbbb8d6a3bee7b33ea6-merged.mount: Deactivated successfully.
Dec 09 16:38:22 compute-0 podman[258417]: 2025-12-09 16:38:22.713642444 +0000 UTC m=+0.740454149 container remove 567f83a8b187761cc0a8c3610297fa3711fc00ce8b2cf7533721907a5655887d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:38:22 compute-0 systemd[1]: libpod-conmon-567f83a8b187761cc0a8c3610297fa3711fc00ce8b2cf7533721907a5655887d.scope: Deactivated successfully.
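The admiring_burnell run is cephadm evaluating the host against its OSD service spec: "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" means the only candidate devices are the three logical volumes, and all of them are already consumed by existing OSDs, so there is nothing new to deploy. The same verdict is visible in the orchestrator's device inventory; a sketch, assuming a `ceph` CLI with an admin keyring and the JSON field names used by recent releases:

    # Show each device on compute-0 and why it is or is not available.
    import json, subprocess

    raw = subprocess.run(
        ["ceph", "orch", "device", "ls", "compute-0", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    for host in json.loads(raw):
        for dev in host.get("devices", []):
            verdict = ("available" if dev.get("available")
                       else f"rejected: {dev.get('rejected_reasons')}")
            print(dev["path"], verdict)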
Dec 09 16:38:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:22 compute-0 sudo[258336]: pam_unix(sudo:session): session closed for user root
Dec 09 16:38:22 compute-0 sudo[258467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:38:22 compute-0 sudo[258467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:38:22 compute-0 sudo[258467]: pam_unix(sudo:session): session closed for user root
Dec 09 16:38:22 compute-0 sudo[258492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:38:22 compute-0 sudo[258492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
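Note the pattern in the two sudo entries: the orchestrator first locates python3, then executes its own copy of cephadm from /var/lib/ceph/<fsid>/ rather than any packaged binary. The 64-hex suffix on the file name looks like a content digest; under the assumption that it is the SHA-256 of the file contents, the copy can be cross-checked against its own name:

    # Check the digest-suffixed cephadm copy against its file name.
    # Assumption: the suffix is the SHA-256 of the file contents.
    import hashlib, pathlib

    path = pathlib.Path(
        "/var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/"
        "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b"
    )
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    print("name matches contents" if path.name.endswith(digest)
          else f"mismatch: {digest}")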
Dec 09 16:38:23 compute-0 podman[258529]: 2025-12-09 16:38:23.1330159 +0000 UTC m=+0.019641195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:38:23 compute-0 podman[258529]: 2025-12-09 16:38:23.322043205 +0000 UTC m=+0.208668480 container create 5bfa9cf00c77993139a4497052183039204588e88e58ef8da3392b4475383ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_morse, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:38:23 compute-0 systemd[1]: Started libpod-conmon-5bfa9cf00c77993139a4497052183039204588e88e58ef8da3392b4475383ab3.scope.
Dec 09 16:38:23 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:38:23 compute-0 podman[258529]: 2025-12-09 16:38:23.497532398 +0000 UTC m=+0.384157693 container init 5bfa9cf00c77993139a4497052183039204588e88e58ef8da3392b4475383ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_morse, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec 09 16:38:23 compute-0 podman[258529]: 2025-12-09 16:38:23.506032478 +0000 UTC m=+0.392657763 container start 5bfa9cf00c77993139a4497052183039204588e88e58ef8da3392b4475383ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 16:38:23 compute-0 thirsty_morse[258545]: 167 167
Dec 09 16:38:23 compute-0 podman[258529]: 2025-12-09 16:38:23.511135962 +0000 UTC m=+0.397761237 container attach 5bfa9cf00c77993139a4497052183039204588e88e58ef8da3392b4475383ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 16:38:23 compute-0 systemd[1]: libpod-5bfa9cf00c77993139a4497052183039204588e88e58ef8da3392b4475383ab3.scope: Deactivated successfully.
Dec 09 16:38:23 compute-0 podman[258529]: 2025-12-09 16:38:23.511925964 +0000 UTC m=+0.398551249 container died 5bfa9cf00c77993139a4497052183039204588e88e58ef8da3392b4475383ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:38:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a71fbe1a964506bc656ce88bcf2272a310fb16a9c073e31a9b05d8016e427f1-merged.mount: Deactivated successfully.
Dec 09 16:38:23 compute-0 podman[258529]: 2025-12-09 16:38:23.550369299 +0000 UTC m=+0.436994574 container remove 5bfa9cf00c77993139a4497052183039204588e88e58ef8da3392b4475383ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_morse, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:38:23 compute-0 systemd[1]: libpod-conmon-5bfa9cf00c77993139a4497052183039204588e88e58ef8da3392b4475383ab3.scope: Deactivated successfully.
Dec 09 16:38:23 compute-0 podman[258570]: 2025-12-09 16:38:23.697896553 +0000 UTC m=+0.040455863 container create 9a48b42c0e9fe89bfdcb861e34ddc2180b9ff94b95c7f640917fc43502fbea38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_boyd, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:38:23 compute-0 systemd[1]: Started libpod-conmon-9a48b42c0e9fe89bfdcb861e34ddc2180b9ff94b95c7f640917fc43502fbea38.scope.
Dec 09 16:38:23 compute-0 podman[258570]: 2025-12-09 16:38:23.681010147 +0000 UTC m=+0.023569487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:38:23 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef63be31616ca9d997911cbe49b578d5b9079e4f7dc0e72ba64bfc59670aeba6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef63be31616ca9d997911cbe49b578d5b9079e4f7dc0e72ba64bfc59670aeba6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef63be31616ca9d997911cbe49b578d5b9079e4f7dc0e72ba64bfc59670aeba6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef63be31616ca9d997911cbe49b578d5b9079e4f7dc0e72ba64bfc59670aeba6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:38:23 compute-0 podman[258570]: 2025-12-09 16:38:23.794432528 +0000 UTC m=+0.136991878 container init 9a48b42c0e9fe89bfdcb861e34ddc2180b9ff94b95c7f640917fc43502fbea38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:38:23 compute-0 podman[258570]: 2025-12-09 16:38:23.802926727 +0000 UTC m=+0.145486037 container start 9a48b42c0e9fe89bfdcb861e34ddc2180b9ff94b95c7f640917fc43502fbea38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_boyd, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:38:23 compute-0 podman[258570]: 2025-12-09 16:38:23.80727872 +0000 UTC m=+0.149838070 container attach 9a48b42c0e9fe89bfdcb861e34ddc2180b9ff94b95c7f640917fc43502fbea38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_boyd, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:38:23 compute-0 ceph-mon[75222]: pgmap v1188: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:24 compute-0 agitated_boyd[258587]: {
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:     "0": [
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:         {
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "devices": [
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "/dev/loop3"
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             ],
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_name": "ceph_lv0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_size": "21470642176",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "name": "ceph_lv0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "tags": {
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.cluster_name": "ceph",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.crush_device_class": "",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.encrypted": "0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.objectstore": "bluestore",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.osd_id": "0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.type": "block",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.vdo": "0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.with_tpm": "0"
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             },
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "type": "block",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "vg_name": "ceph_vg0"
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:         }
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:     ],
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:     "1": [
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:         {
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "devices": [
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "/dev/loop4"
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             ],
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_name": "ceph_lv1",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_size": "21470642176",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "name": "ceph_lv1",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "tags": {
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.cluster_name": "ceph",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.crush_device_class": "",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.encrypted": "0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.objectstore": "bluestore",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.osd_id": "1",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.type": "block",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.vdo": "0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.with_tpm": "0"
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             },
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "type": "block",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "vg_name": "ceph_vg1"
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:         }
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:     ],
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:     "2": [
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:         {
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "devices": [
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "/dev/loop5"
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             ],
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_name": "ceph_lv2",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_size": "21470642176",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "name": "ceph_lv2",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "tags": {
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.cluster_name": "ceph",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.crush_device_class": "",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.encrypted": "0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.objectstore": "bluestore",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.osd_id": "2",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.type": "block",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.vdo": "0",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:                 "ceph.with_tpm": "0"
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             },
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "type": "block",
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:             "vg_name": "ceph_vg2"
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:         }
Dec 09 16:38:24 compute-0 agitated_boyd[258587]:     ]
Dec 09 16:38:24 compute-0 agitated_boyd[258587]: }
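The JSON block emitted by agitated_boyd is the result of the `ceph-volume lvm list --format json` call logged above: a map of OSD id to the logical volume(s) backing it, with the authoritative OSD metadata carried as LVM tags (ceph.osd_fsid, ceph.cluster_fsid, ceph.objectstore, and so on). A small sketch reducing it to one line per OSD:

    # Reduce `ceph-volume lvm list --format json` output to an OSD table.
    import json, sys

    lvm = json.load(sys.stdin)   # e.g. the JSON above piped in
    for osd_id, vols in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for vol in vols:
            tags = vol["tags"]
            print(f"osd.{osd_id}: {vol['lv_path']} on {','.join(vol['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"objectstore={tags['ceph.objectstore']})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907, objectstore=bluestore)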
Dec 09 16:38:24 compute-0 systemd[1]: libpod-9a48b42c0e9fe89bfdcb861e34ddc2180b9ff94b95c7f640917fc43502fbea38.scope: Deactivated successfully.
Dec 09 16:38:24 compute-0 podman[258570]: 2025-12-09 16:38:24.120405118 +0000 UTC m=+0.462964428 container died 9a48b42c0e9fe89bfdcb861e34ddc2180b9ff94b95c7f640917fc43502fbea38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef63be31616ca9d997911cbe49b578d5b9079e4f7dc0e72ba64bfc59670aeba6-merged.mount: Deactivated successfully.
Dec 09 16:38:24 compute-0 podman[258570]: 2025-12-09 16:38:24.273475848 +0000 UTC m=+0.616035158 container remove 9a48b42c0e9fe89bfdcb861e34ddc2180b9ff94b95c7f640917fc43502fbea38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:38:24 compute-0 systemd[1]: libpod-conmon-9a48b42c0e9fe89bfdcb861e34ddc2180b9ff94b95c7f640917fc43502fbea38.scope: Deactivated successfully.
Dec 09 16:38:24 compute-0 sudo[258492]: pam_unix(sudo:session): session closed for user root
Dec 09 16:38:24 compute-0 sudo[258610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:38:24 compute-0 sudo[258610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:38:24 compute-0 sudo[258610]: pam_unix(sudo:session): session closed for user root
Dec 09 16:38:24 compute-0 sudo[258635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:38:24 compute-0 sudo[258635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:38:24 compute-0 podman[258672]: 2025-12-09 16:38:24.73590781 +0000 UTC m=+0.038571439 container create bd146aa61cfa6b5f52d31c44e331035e65a03772bb084e5d5cdbbd56ca302439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cerf, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 09 16:38:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:24 compute-0 systemd[1]: Started libpod-conmon-bd146aa61cfa6b5f52d31c44e331035e65a03772bb084e5d5cdbbd56ca302439.scope.
Dec 09 16:38:24 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:38:24 compute-0 podman[258672]: 2025-12-09 16:38:24.804372312 +0000 UTC m=+0.107035961 container init bd146aa61cfa6b5f52d31c44e331035e65a03772bb084e5d5cdbbd56ca302439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cerf, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:38:24 compute-0 podman[258672]: 2025-12-09 16:38:24.810454014 +0000 UTC m=+0.113117643 container start bd146aa61cfa6b5f52d31c44e331035e65a03772bb084e5d5cdbbd56ca302439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:38:24 compute-0 podman[258672]: 2025-12-09 16:38:24.81351143 +0000 UTC m=+0.116175059 container attach bd146aa61cfa6b5f52d31c44e331035e65a03772bb084e5d5cdbbd56ca302439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 16:38:24 compute-0 bold_cerf[258688]: 167 167
Dec 09 16:38:24 compute-0 systemd[1]: libpod-bd146aa61cfa6b5f52d31c44e331035e65a03772bb084e5d5cdbbd56ca302439.scope: Deactivated successfully.
Dec 09 16:38:24 compute-0 podman[258672]: 2025-12-09 16:38:24.720546797 +0000 UTC m=+0.023210456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:38:24 compute-0 conmon[258688]: conmon bd146aa61cfa6b5f52d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd146aa61cfa6b5f52d31c44e331035e65a03772bb084e5d5cdbbd56ca302439.scope/container/memory.events
Dec 09 16:38:24 compute-0 podman[258672]: 2025-12-09 16:38:24.815790955 +0000 UTC m=+0.118454614 container died bd146aa61cfa6b5f52d31c44e331035e65a03772bb084e5d5cdbbd56ca302439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2515379ab34cd5dc829a1d527dd4d6463fccb694cab2727026fdd548e9c5c4e-merged.mount: Deactivated successfully.
Dec 09 16:38:24 compute-0 podman[258672]: 2025-12-09 16:38:24.847608663 +0000 UTC m=+0.150272292 container remove bd146aa61cfa6b5f52d31c44e331035e65a03772bb084e5d5cdbbd56ca302439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cerf, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 09 16:38:24 compute-0 systemd[1]: libpod-conmon-bd146aa61cfa6b5f52d31c44e331035e65a03772bb084e5d5cdbbd56ca302439.scope: Deactivated successfully.
Dec 09 16:38:25 compute-0 podman[258711]: 2025-12-09 16:38:25.007417323 +0000 UTC m=+0.035050280 container create fd8b2e3fccde76af62867ce9f9c7eabcaba6db1102f0824f8e95dec52be343cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_easley, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:38:25 compute-0 systemd[1]: Started libpod-conmon-fd8b2e3fccde76af62867ce9f9c7eabcaba6db1102f0824f8e95dec52be343cc.scope.
Dec 09 16:38:25 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6193aeed99331355649527a05a88cc70ab84c170019bcd7b3d758ed88c3a5cc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6193aeed99331355649527a05a88cc70ab84c170019bcd7b3d758ed88c3a5cc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6193aeed99331355649527a05a88cc70ab84c170019bcd7b3d758ed88c3a5cc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6193aeed99331355649527a05a88cc70ab84c170019bcd7b3d758ed88c3a5cc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:38:25 compute-0 podman[258711]: 2025-12-09 16:38:25.080711612 +0000 UTC m=+0.108344599 container init fd8b2e3fccde76af62867ce9f9c7eabcaba6db1102f0824f8e95dec52be343cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 09 16:38:25 compute-0 podman[258711]: 2025-12-09 16:38:25.08738864 +0000 UTC m=+0.115021597 container start fd8b2e3fccde76af62867ce9f9c7eabcaba6db1102f0824f8e95dec52be343cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_easley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 09 16:38:25 compute-0 podman[258711]: 2025-12-09 16:38:24.992401549 +0000 UTC m=+0.020034526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:38:25 compute-0 podman[258711]: 2025-12-09 16:38:25.090743485 +0000 UTC m=+0.118376442 container attach fd8b2e3fccde76af62867ce9f9c7eabcaba6db1102f0824f8e95dec52be343cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_easley, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 09 16:38:25 compute-0 lvm[258807]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:38:25 compute-0 lvm[258807]: VG ceph_vg1 finished
Dec 09 16:38:25 compute-0 lvm[258806]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:38:25 compute-0 lvm[258806]: VG ceph_vg0 finished
Dec 09 16:38:25 compute-0 lvm[258809]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:38:25 compute-0 lvm[258809]: VG ceph_vg2 finished
Dec 09 16:38:25 compute-0 magical_easley[258728]: {}
Dec 09 16:38:25 compute-0 ceph-mon[75222]: pgmap v1189: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:25 compute-0 systemd[1]: libpod-fd8b2e3fccde76af62867ce9f9c7eabcaba6db1102f0824f8e95dec52be343cc.scope: Deactivated successfully.
Dec 09 16:38:25 compute-0 systemd[1]: libpod-fd8b2e3fccde76af62867ce9f9c7eabcaba6db1102f0824f8e95dec52be343cc.scope: Consumed 1.226s CPU time.
Dec 09 16:38:25 compute-0 podman[258711]: 2025-12-09 16:38:25.859338848 +0000 UTC m=+0.886971845 container died fd8b2e3fccde76af62867ce9f9c7eabcaba6db1102f0824f8e95dec52be343cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_easley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:38:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6193aeed99331355649527a05a88cc70ab84c170019bcd7b3d758ed88c3a5cc0-merged.mount: Deactivated successfully.
Dec 09 16:38:25 compute-0 podman[258711]: 2025-12-09 16:38:25.901878489 +0000 UTC m=+0.929511456 container remove fd8b2e3fccde76af62867ce9f9c7eabcaba6db1102f0824f8e95dec52be343cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_easley, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 16:38:25 compute-0 systemd[1]: libpod-conmon-fd8b2e3fccde76af62867ce9f9c7eabcaba6db1102f0824f8e95dec52be343cc.scope: Deactivated successfully.
Dec 09 16:38:25 compute-0 sudo[258635]: pam_unix(sudo:session): session closed for user root
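The follow-up `ceph-volume raw list --format json` run (magical_easley) printed an empty object: `raw list` reports only OSDs prepared directly on raw block devices, and all three OSDs here are LVM-backed, so `{}` is the expected answer rather than an error. (The interleaved lvm pvscan lines are apparently udev event activation noticing the loop-device PVs as the container scanned them.) A sketch of merging the two inventories the way a caller might, with the raw-list layout (osd_fsid keys, an osd_id field per entry) taken as an assumption:

    # Combine the lvm and raw inventories into one osd_id -> kind map.
    # Assumes the two outputs were saved to lvm.json and raw.json.
    import json

    lvm = json.load(open("lvm.json"))   # {"0": [...], "1": [...], "2": [...]}
    raw = json.load(open("raw.json"))   # {} on this host: no raw-mode OSDs

    osds = {int(osd_id): "lvm" for osd_id in lvm}
    for entry in raw.values():          # assumed keyed by osd_fsid
        osds[int(entry["osd_id"])] = "raw"

    for osd_id in sorted(osds):
        print(f"osd.{osd_id}: {osds[osd_id]}")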
Dec 09 16:38:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:38:25 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:38:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:38:25 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
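With the probes finished, the mgr persists the refreshed results into the monitors' config-key store (the two `config-key set` commands above, under mgr/cephadm/host.compute-0...). That cache is what the orchestrator consults between refreshes, and it can be read back directly; a sketch, assuming the stored payload is JSON as in current cephadm:

    # Read back the device inventory cephadm just cached in config-key.
    import json, subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    val = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True).stdout
    print(json.dumps(json.loads(val), indent=2)[:400])   # first 400 chars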
Dec 09 16:38:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:38:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:38:25
Dec 09 16:38:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:38:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:38:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.meta', '.rgw.root']
Dec 09 16:38:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
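The balancer lines record one idle pass: mode upmap, a ceiling of 5% misplaced objects, and "prepared 0/10 upmap changes", i.e. it was permitted up to ten pg-upmap entries this round and found nothing to improve across the listed pools (unsurprising with all 305 PGs active+clean). Its state can be queried from the module itself; a minimal sketch:

    # Ask the mgr balancer module for its current mode and activity.
    import json, subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    print(json.loads(out))   # e.g. {'active': True, 'mode': 'upmap', ...}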
Dec 09 16:38:26 compute-0 sudo[258824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:38:26 compute-0 sudo[258824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:38:26 compute-0 sudo[258824]: pam_unix(sudo:session): session closed for user root
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:38:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:38:26 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:38:26 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:38:27 compute-0 ceph-mon[75222]: pgmap v1190: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:30 compute-0 ceph-mon[75222]: pgmap v1191: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:38:32 compute-0 sshd-session[258849]: Invalid user oracle from 146.190.31.45 port 43536
Dec 09 16:38:32 compute-0 ceph-mon[75222]: pgmap v1192: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:32 compute-0 sshd-session[258849]: Connection closed by invalid user oracle 146.190.31.45 port 43536 [preauth]
Dec 09 16:38:32 compute-0 podman[258851]: 2025-12-09 16:38:32.25833602 +0000 UTC m=+0.083540889 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
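Interleaved with the Ceph traffic, the multipathd health_status event above is podman's periodic healthcheck for an OpenStack EDPM container (health_status=healthy, failing streak 0). The config_data field inside such events is a Python-literal dict rather than JSON (single quotes, bare True), so it parses with ast.literal_eval rather than json.loads:

    # Parse the config_data payload embedded in a podman healthcheck event.
    import ast

    config_data = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
                   "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', "
                   "'test': '/openstack/healthcheck'}, "
                   "'net': 'host', 'privileged': True}")   # abridged from the event above

    cfg = ast.literal_eval(config_data)       # json.loads would reject this
    print(cfg["healthcheck"]["test"])         # -> /openstack/healthcheck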
Dec 09 16:38:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:34 compute-0 ceph-mon[75222]: pgmap v1193: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:38:36 compute-0 ceph-mon[75222]: pgmap v1194: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.427166042984902e-07 of space, bias 1.0, pg target 0.00025281498128954704 quantized to 32 (current 32)
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7749663169956723e-06 of space, bias 4.0, pg target 0.0021299595803948067 quantized to 16 (current 16)
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:38:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
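
The pg_autoscaler pass above carries its arithmetic in plain sight: every pool's "pg target" is its fraction of raw capacity times its bias times a constant 300, which matches the default mon_target_pg_per_osd of 100 across the 3 OSDs this host carves out of ceph_vg0..ceph_vg2 (see the ceph-volume call at 16:39:26 below). A minimal sketch under those assumptions; the function name is illustrative, not Ceph source:

    # Sketch of the arithmetic in the pg_autoscaler lines above.
    # Assumptions (not stated in the log): default mon_target_pg_per_osd=100
    # and the 3 LVM-backed OSDs this host deploys.
    def pg_target(capacity_ratio, bias, n_osd=3, target_pg_per_osd=100):
        # "Pool X using R of space, bias B, pg target T" with T = R * B * 300
        return capacity_ratio * bias * target_pg_per_osd * n_osd

    # Reproduces the '.mgr' line: 7.185749983720779e-06 * 1.0 * 300
    # -> 0.0021557249951162337, then quantized to a power of two (here 1).
    assert abs(pg_target(7.185749983720779e-06, 1.0)
               - 0.0021557249951162337) < 1e-12
    # The bias-4 pools check out too, e.g. cephfs.cephfs.meta:
    assert abs(pg_target(1.7749663169956723e-06, 4.0)
               - 0.0021299595803948067) < 1e-12

The module then rounds the target to a power of two and compares it against the current pg_num, which is consistent with the pools whose target is 0.0 staying at "quantized to 32 (current 32)" with no change proposed.
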
Dec 09 16:38:38 compute-0 ceph-mon[75222]: pgmap v1195: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:39 compute-0 ceph-mon[75222]: pgmap v1196: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:38:42 compute-0 ceph-mon[75222]: pgmap v1197: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:43 compute-0 ceph-mon[75222]: pgmap v1198: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:45 compute-0 ceph-mon[75222]: pgmap v1199: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:38:46 compute-0 podman[258873]: 2025-12-09 16:38:46.629502048 +0000 UTC m=+0.064191073 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 16:38:46 compute-0 podman[258872]: 2025-12-09 16:38:46.678708777 +0000 UTC m=+0.116159460 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:38:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:47 compute-0 ceph-mon[75222]: pgmap v1200: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:49 compute-0 ceph-mon[75222]: pgmap v1201: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:38:52 compute-0 ceph-mon[75222]: pgmap v1202: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:54 compute-0 ceph-mon[75222]: pgmap v1203: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:38:56 compute-0 ceph-mon[75222]: pgmap v1204: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:38:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:38:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:38:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:38:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:38:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:38:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:58 compute-0 ceph-mon[75222]: pgmap v1205: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:38:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:00 compute-0 ceph-mon[75222]: pgmap v1206: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:39:02 compute-0 ceph-mon[75222]: pgmap v1207: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:02 compute-0 podman[258917]: 2025-12-09 16:39:02.650215601 +0000 UTC m=+0.089822517 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.build-date=20251202)
Dec 09 16:39:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:04 compute-0 nova_compute[243452]: 2025-12-09 16:39:04.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:39:04 compute-0 ceph-mon[75222]: pgmap v1208: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:04 compute-0 nova_compute[243452]: 2025-12-09 16:39:04.811 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:39:04 compute-0 nova_compute[243452]: 2025-12-09 16:39:04.812 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:39:04 compute-0 nova_compute[243452]: 2025-12-09 16:39:04.812 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:39:04 compute-0 nova_compute[243452]: 2025-12-09 16:39:04.812 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:39:04 compute-0 nova_compute[243452]: 2025-12-09 16:39:04.813 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:39:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:39:05 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2648593747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:39:05 compute-0 nova_compute[243452]: 2025-12-09 16:39:05.318 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
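
The "Running cmd" / "returned: 0" pair above is nova's resource tracker probing Ceph capacity for its RBD backend through oslo_concurrency.processutils. A self-contained sketch of the same probe, with the JSON field names hedged as the usual `ceph df --format=json` layout rather than anything this log confirms:

    # Sketch of the capacity probe nova runs here, outside oslo_concurrency.
    # Same flags as the logged command; the 'stats'/'total_avail_bytes'
    # fields are the customary `ceph df` JSON shape (assumption).
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    free_gb = stats["total_avail_bytes"] / 1024 ** 3
    print(f"free_disk={free_gb:.2f}GB")  # feeds the resource view below
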
Dec 09 16:39:05 compute-0 nova_compute[243452]: 2025-12-09 16:39:05.463 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:39:05 compute-0 nova_compute[243452]: 2025-12-09 16:39:05.464 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5131MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:39:05 compute-0 nova_compute[243452]: 2025-12-09 16:39:05.464 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:39:05 compute-0 nova_compute[243452]: 2025-12-09 16:39:05.465 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:39:05 compute-0 nova_compute[243452]: 2025-12-09 16:39:05.534 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:39:05 compute-0 nova_compute[243452]: 2025-12-09 16:39:05.534 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:39:05 compute-0 nova_compute[243452]: 2025-12-09 16:39:05.588 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:39:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:39:06 compute-0 ceph-mon[75222]: pgmap v1209: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:06 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2648593747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:39:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:39:06 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/767934075' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:39:06 compute-0 nova_compute[243452]: 2025-12-09 16:39:06.112 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:39:06 compute-0 nova_compute[243452]: 2025-12-09 16:39:06.118 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:39:06 compute-0 nova_compute[243452]: 2025-12-09 16:39:06.185 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:39:06 compute-0 nova_compute[243452]: 2025-12-09 16:39:06.187 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:39:06 compute-0 nova_compute[243452]: 2025-12-09 16:39:06.187 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
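
The inventory dict logged at 16:39:06.185 fixes what placement will offer the scheduler: per resource class, capacity is (total - reserved) * allocation_ratio. A worked check using the logged values:

    # Worked check of the inventory logged at 16:39:06.185; the formula is
    # placement's standard capacity calculation, values copied verbatim.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1

So this node advertises 32 schedulable vCPUs, 7167 MB of RAM and 53.1 GB of disk while physically holding 8 vCPUs, 7679 MB and 59 GB.
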
Dec 09 16:39:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:07 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/767934075' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:39:07 compute-0 nova_compute[243452]: 2025-12-09 16:39:07.188 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:39:07 compute-0 nova_compute[243452]: 2025-12-09 16:39:07.189 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:39:07 compute-0 nova_compute[243452]: 2025-12-09 16:39:07.189 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:39:07 compute-0 nova_compute[243452]: 2025-12-09 16:39:07.202 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:39:07 compute-0 nova_compute[243452]: 2025-12-09 16:39:07.202 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:39:07 compute-0 nova_compute[243452]: 2025-12-09 16:39:07.203 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:39:08 compute-0 ceph-mon[75222]: pgmap v1210: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:10 compute-0 nova_compute[243452]: 2025-12-09 16:39:10.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:39:10 compute-0 nova_compute[243452]: 2025-12-09 16:39:10.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:39:10 compute-0 nova_compute[243452]: 2025-12-09 16:39:10.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:39:10 compute-0 ceph-mon[75222]: pgmap v1211: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:39:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2421416651' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:39:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:39:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2421416651' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:39:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:39:11 compute-0 nova_compute[243452]: 2025-12-09 16:39:11.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:39:11 compute-0 nova_compute[243452]: 2025-12-09 16:39:11.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:39:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/2421416651' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:39:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/2421416651' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:39:12 compute-0 nova_compute[243452]: 2025-12-09 16:39:12.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:39:12 compute-0 ceph-mon[75222]: pgmap v1212: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:14 compute-0 ceph-mon[75222]: pgmap v1213: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:15 compute-0 sshd-session[258983]: Invalid user oracle from 146.190.31.45 port 54974
Dec 09 16:39:15 compute-0 sshd-session[258983]: Connection closed by invalid user oracle 146.190.31.45 port 54974 [preauth]
Dec 09 16:39:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:39:16 compute-0 ceph-mon[75222]: pgmap v1214: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:17 compute-0 podman[258986]: 2025-12-09 16:39:17.636627515 +0000 UTC m=+0.067251339 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:39:17 compute-0 podman[258985]: 2025-12-09 16:39:17.639610819 +0000 UTC m=+0.085712350 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 09 16:39:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:39:17.859 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:39:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:39:17.860 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:39:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:39:17.860 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:39:18 compute-0 ceph-mon[75222]: pgmap v1215: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:20 compute-0 ceph-mon[75222]: pgmap v1216: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:39:22 compute-0 ceph-mon[75222]: pgmap v1217: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:24 compute-0 ceph-mon[75222]: pgmap v1218: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:39:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:39:25
Dec 09 16:39:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:39:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:39:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.control', 'images', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'volumes']
Dec 09 16:39:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
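
The balancer pass above ran in upmap mode and prepared 0 of a possible 10 upmap changes; "max misplaced 0.050000" is the mgr's target_max_misplaced_ratio. A hedged operator-side sketch for reading those knobs back (both commands exist in upstream Ceph, but neither is invoked anywhere in this log):

    # Sketch: confirm the balancer state reported above from the CLI.
    import subprocess

    def ceph(*args):
        return subprocess.check_output(("ceph",) + args).decode().strip()

    print(ceph("balancer", "status"))  # active mode, last optimize result
    print(ceph("config", "get", "mgr",
               "target_max_misplaced_ratio"))  # expect 0.050000
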
Dec 09 16:39:26 compute-0 sudo[259030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:39:26 compute-0 sudo[259030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:39:26 compute-0 sudo[259030]: pam_unix(sudo:session): session closed for user root
Dec 09 16:39:26 compute-0 sudo[259055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:39:26 compute-0 sudo[259055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:39:26 compute-0 ceph-mon[75222]: pgmap v1219: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:39:26 compute-0 sudo[259055]: pam_unix(sudo:session): session closed for user root
Dec 09 16:39:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:39:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:39:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:39:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:39:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:39:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:39:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:39:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:39:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:39:26 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:39:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:39:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:26 compute-0 sudo[259110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:39:26 compute-0 sudo[259110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:39:26 compute-0 sudo[259110]: pam_unix(sudo:session): session closed for user root
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:39:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:39:26 compute-0 sudo[259135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:39:26 compute-0 sudo[259135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:39:27 compute-0 podman[259172]: 2025-12-09 16:39:27.159503205 +0000 UTC m=+0.049922420 container create 9a63348add37e1b8a7fe2afe89eca88dd5d22ed9eacf06cb1566946515334464 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tesla, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:39:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:39:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:39:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:39:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:39:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:39:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:39:27 compute-0 systemd[1]: Started libpod-conmon-9a63348add37e1b8a7fe2afe89eca88dd5d22ed9eacf06cb1566946515334464.scope.
Dec 09 16:39:27 compute-0 podman[259172]: 2025-12-09 16:39:27.137644808 +0000 UTC m=+0.028064053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:39:27 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:39:27 compute-0 podman[259172]: 2025-12-09 16:39:27.272391341 +0000 UTC m=+0.162810606 container init 9a63348add37e1b8a7fe2afe89eca88dd5d22ed9eacf06cb1566946515334464 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tesla, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:39:27 compute-0 podman[259172]: 2025-12-09 16:39:27.285262904 +0000 UTC m=+0.175682149 container start 9a63348add37e1b8a7fe2afe89eca88dd5d22ed9eacf06cb1566946515334464 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tesla, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:39:27 compute-0 podman[259172]: 2025-12-09 16:39:27.289544725 +0000 UTC m=+0.179964040 container attach 9a63348add37e1b8a7fe2afe89eca88dd5d22ed9eacf06cb1566946515334464 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tesla, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:39:27 compute-0 zealous_tesla[259188]: 167 167
Dec 09 16:39:27 compute-0 systemd[1]: libpod-9a63348add37e1b8a7fe2afe89eca88dd5d22ed9eacf06cb1566946515334464.scope: Deactivated successfully.
Dec 09 16:39:27 compute-0 podman[259172]: 2025-12-09 16:39:27.292819497 +0000 UTC m=+0.183238752 container died 9a63348add37e1b8a7fe2afe89eca88dd5d22ed9eacf06cb1566946515334464 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tesla, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 09 16:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f547a6c12b5d14e886b80755c53b28b534336c456604c2c41d9930b3ab877eac-merged.mount: Deactivated successfully.
Dec 09 16:39:27 compute-0 podman[259172]: 2025-12-09 16:39:27.337954311 +0000 UTC m=+0.228373576 container remove 9a63348add37e1b8a7fe2afe89eca88dd5d22ed9eacf06cb1566946515334464 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tesla, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:39:27 compute-0 systemd[1]: libpod-conmon-9a63348add37e1b8a7fe2afe89eca88dd5d22ed9eacf06cb1566946515334464.scope: Deactivated successfully.
Dec 09 16:39:27 compute-0 podman[259213]: 2025-12-09 16:39:27.533456309 +0000 UTC m=+0.047915993 container create db4997bd7f3fef80ca50c95a62de6675b0e23b963a198c9d7a3be6c209826fef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_golick, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 09 16:39:27 compute-0 systemd[1]: Started libpod-conmon-db4997bd7f3fef80ca50c95a62de6675b0e23b963a198c9d7a3be6c209826fef.scope.
Dec 09 16:39:27 compute-0 podman[259213]: 2025-12-09 16:39:27.509356239 +0000 UTC m=+0.023815963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:39:27 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a8916404cf4ce1969b4f4ad5b36c32d0c9478d1d2fa509ccd3628ab290e2ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a8916404cf4ce1969b4f4ad5b36c32d0c9478d1d2fa509ccd3628ab290e2ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a8916404cf4ce1969b4f4ad5b36c32d0c9478d1d2fa509ccd3628ab290e2ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a8916404cf4ce1969b4f4ad5b36c32d0c9478d1d2fa509ccd3628ab290e2ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a8916404cf4ce1969b4f4ad5b36c32d0c9478d1d2fa509ccd3628ab290e2ea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:27 compute-0 podman[259213]: 2025-12-09 16:39:27.624275672 +0000 UTC m=+0.138735356 container init db4997bd7f3fef80ca50c95a62de6675b0e23b963a198c9d7a3be6c209826fef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:39:27 compute-0 podman[259213]: 2025-12-09 16:39:27.638493084 +0000 UTC m=+0.152952768 container start db4997bd7f3fef80ca50c95a62de6675b0e23b963a198c9d7a3be6c209826fef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 09 16:39:27 compute-0 podman[259213]: 2025-12-09 16:39:27.642369283 +0000 UTC m=+0.156828967 container attach db4997bd7f3fef80ca50c95a62de6675b0e23b963a198c9d7a3be6c209826fef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:39:28 compute-0 bold_golick[259229]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:39:28 compute-0 bold_golick[259229]: --> All data devices are unavailable
Dec 09 16:39:28 compute-0 systemd[1]: libpod-db4997bd7f3fef80ca50c95a62de6675b0e23b963a198c9d7a3be6c209826fef.scope: Deactivated successfully.
Dec 09 16:39:28 compute-0 podman[259213]: 2025-12-09 16:39:28.144519216 +0000 UTC m=+0.658978900 container died db4997bd7f3fef80ca50c95a62de6675b0e23b963a198c9d7a3be6c209826fef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 16:39:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-08a8916404cf4ce1969b4f4ad5b36c32d0c9478d1d2fa509ccd3628ab290e2ea-merged.mount: Deactivated successfully.
Dec 09 16:39:28 compute-0 ceph-mon[75222]: pgmap v1220: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:28 compute-0 podman[259213]: 2025-12-09 16:39:28.192255003 +0000 UTC m=+0.706714697 container remove db4997bd7f3fef80ca50c95a62de6675b0e23b963a198c9d7a3be6c209826fef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:39:28 compute-0 systemd[1]: libpod-conmon-db4997bd7f3fef80ca50c95a62de6675b0e23b963a198c9d7a3be6c209826fef.scope: Deactivated successfully.
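The create/start/attach/died/remove sequence above, repeated for each short-lived container in this section, is how cephadm probes the host: it runs a throwaway ceph container per query, captures its output, and discards it. A minimal sketch of the same one-shot pattern, assuming podman is on PATH (run_oneshot is a hypothetical helper, not cephadm's actual code; the real invocation also sets an entrypoint, bind mounts, and privileges, all omitted here):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    def run_oneshot(args):
        # `podman run --rm` creates, starts, attaches to, and then removes the
        # container, which is what produces the journal event sequence above.
        return subprocess.run(
            ["podman", "run", "--rm", IMAGE, *args],
            capture_output=True, text=True, check=False,
        )

    result = run_oneshot(["ceph-volume", "lvm", "list", "--format", "json"])
    print(result.stdout)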
Dec 09 16:39:28 compute-0 sudo[259135]: pam_unix(sudo:session): session closed for user root
Dec 09 16:39:28 compute-0 sudo[259260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:39:28 compute-0 sudo[259260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:39:28 compute-0 sudo[259260]: pam_unix(sudo:session): session closed for user root
Dec 09 16:39:28 compute-0 sudo[259285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:39:28 compute-0 sudo[259285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:39:28 compute-0 podman[259322]: 2025-12-09 16:39:28.645500976 +0000 UTC m=+0.038058256 container create 52a141b42685d48a23708cb7072ef0b7f2df640ee3bc57c89bff3b1eb863ff55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:39:28 compute-0 systemd[1]: Started libpod-conmon-52a141b42685d48a23708cb7072ef0b7f2df640ee3bc57c89bff3b1eb863ff55.scope.
Dec 09 16:39:28 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:39:28 compute-0 podman[259322]: 2025-12-09 16:39:28.724267939 +0000 UTC m=+0.116825269 container init 52a141b42685d48a23708cb7072ef0b7f2df640ee3bc57c89bff3b1eb863ff55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 16:39:28 compute-0 podman[259322]: 2025-12-09 16:39:28.630945125 +0000 UTC m=+0.023502425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:39:28 compute-0 podman[259322]: 2025-12-09 16:39:28.732399328 +0000 UTC m=+0.124956608 container start 52a141b42685d48a23708cb7072ef0b7f2df640ee3bc57c89bff3b1eb863ff55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 09 16:39:28 compute-0 podman[259322]: 2025-12-09 16:39:28.735971549 +0000 UTC m=+0.128528829 container attach 52a141b42685d48a23708cb7072ef0b7f2df640ee3bc57c89bff3b1eb863ff55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mclaren, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:39:28 compute-0 wonderful_mclaren[259339]: 167 167
Dec 09 16:39:28 compute-0 systemd[1]: libpod-52a141b42685d48a23708cb7072ef0b7f2df640ee3bc57c89bff3b1eb863ff55.scope: Deactivated successfully.
Dec 09 16:39:28 compute-0 podman[259322]: 2025-12-09 16:39:28.739448407 +0000 UTC m=+0.132005687 container died 52a141b42685d48a23708cb7072ef0b7f2df640ee3bc57c89bff3b1eb863ff55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 16:39:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-df59623435ae0418dd0c8c2d16b83543b84e65964356da4e84fd49e939b1b09a-merged.mount: Deactivated successfully.
Dec 09 16:39:28 compute-0 podman[259322]: 2025-12-09 16:39:28.77745778 +0000 UTC m=+0.170015060 container remove 52a141b42685d48a23708cb7072ef0b7f2df640ee3bc57c89bff3b1eb863ff55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mclaren, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:39:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:28 compute-0 systemd[1]: libpod-conmon-52a141b42685d48a23708cb7072ef0b7f2df640ee3bc57c89bff3b1eb863ff55.scope: Deactivated successfully.
Dec 09 16:39:28 compute-0 podman[259362]: 2025-12-09 16:39:28.954195118 +0000 UTC m=+0.047087330 container create cb21cc648d9ded93e44d92a4d5dd162d0e92e815e9e538738b1d62f293774631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:39:29 compute-0 systemd[1]: Started libpod-conmon-cb21cc648d9ded93e44d92a4d5dd162d0e92e815e9e538738b1d62f293774631.scope.
Dec 09 16:39:29 compute-0 podman[259362]: 2025-12-09 16:39:28.930016376 +0000 UTC m=+0.022908608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:39:29 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8400d93734e3063d0061f5f97ae8b6d6fd88dfd5eb4fbaf9d538524bf992a95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8400d93734e3063d0061f5f97ae8b6d6fd88dfd5eb4fbaf9d538524bf992a95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8400d93734e3063d0061f5f97ae8b6d6fd88dfd5eb4fbaf9d538524bf992a95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8400d93734e3063d0061f5f97ae8b6d6fd88dfd5eb4fbaf9d538524bf992a95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:29 compute-0 podman[259362]: 2025-12-09 16:39:29.041901934 +0000 UTC m=+0.134794136 container init cb21cc648d9ded93e44d92a4d5dd162d0e92e815e9e538738b1d62f293774631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hermann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:39:29 compute-0 podman[259362]: 2025-12-09 16:39:29.055054885 +0000 UTC m=+0.147947057 container start cb21cc648d9ded93e44d92a4d5dd162d0e92e815e9e538738b1d62f293774631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:39:29 compute-0 podman[259362]: 2025-12-09 16:39:29.058654806 +0000 UTC m=+0.151546998 container attach cb21cc648d9ded93e44d92a4d5dd162d0e92e815e9e538738b1d62f293774631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hermann, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:39:29 compute-0 musing_hermann[259378]: {
Dec 09 16:39:29 compute-0 musing_hermann[259378]:     "0": [
Dec 09 16:39:29 compute-0 musing_hermann[259378]:         {
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "devices": [
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "/dev/loop3"
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             ],
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_name": "ceph_lv0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_size": "21470642176",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "name": "ceph_lv0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "tags": {
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.cluster_name": "ceph",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.crush_device_class": "",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.encrypted": "0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.objectstore": "bluestore",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.osd_id": "0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.type": "block",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.vdo": "0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.with_tpm": "0"
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             },
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "type": "block",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "vg_name": "ceph_vg0"
Dec 09 16:39:29 compute-0 musing_hermann[259378]:         }
Dec 09 16:39:29 compute-0 musing_hermann[259378]:     ],
Dec 09 16:39:29 compute-0 musing_hermann[259378]:     "1": [
Dec 09 16:39:29 compute-0 musing_hermann[259378]:         {
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "devices": [
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "/dev/loop4"
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             ],
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_name": "ceph_lv1",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_size": "21470642176",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "name": "ceph_lv1",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "tags": {
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.cluster_name": "ceph",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.crush_device_class": "",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.encrypted": "0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.objectstore": "bluestore",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.osd_id": "1",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.type": "block",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.vdo": "0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.with_tpm": "0"
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             },
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "type": "block",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "vg_name": "ceph_vg1"
Dec 09 16:39:29 compute-0 musing_hermann[259378]:         }
Dec 09 16:39:29 compute-0 musing_hermann[259378]:     ],
Dec 09 16:39:29 compute-0 musing_hermann[259378]:     "2": [
Dec 09 16:39:29 compute-0 musing_hermann[259378]:         {
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "devices": [
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "/dev/loop5"
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             ],
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_name": "ceph_lv2",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_size": "21470642176",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "name": "ceph_lv2",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "tags": {
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.cluster_name": "ceph",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.crush_device_class": "",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.encrypted": "0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.objectstore": "bluestore",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.osd_id": "2",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.type": "block",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.vdo": "0",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:                 "ceph.with_tpm": "0"
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             },
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "type": "block",
Dec 09 16:39:29 compute-0 musing_hermann[259378]:             "vg_name": "ceph_vg2"
Dec 09 16:39:29 compute-0 musing_hermann[259378]:         }
Dec 09 16:39:29 compute-0 musing_hermann[259378]:     ]
Dec 09 16:39:29 compute-0 musing_hermann[259378]: }
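The JSON printed by musing_hermann above is the result of the `ceph-volume ... lvm list --format json` command logged at 16:39:28: a map from OSD id to the logical volumes backing it, with the ceph.* lv_tags repeated as a parsed "tags" object. A minimal sketch that reduces it to one row per OSD (assumes the JSON has already been captured from the container output as a string):

    import json

    def summarize_lvm_list(raw: str):
        # One row per (osd_id, lv): backing device(s), LV path, and OSD fsid,
        # taken from the same fields shown in the log output above.
        rows = []
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                rows.append({
                    "osd_id": osd_id,                        # "0", "1", "2"
                    "devices": lv["devices"],                # e.g. ["/dev/loop3"]
                    "lv_path": lv["lv_path"],                # e.g. "/dev/ceph_vg0/ceph_lv0"
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                })
        return rows

For the output above this yields OSDs 0, 1, and 2 on /dev/loop3, /dev/loop4, and /dev/loop5 via ceph_vg0 through ceph_vg2.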
Dec 09 16:39:29 compute-0 systemd[1]: libpod-cb21cc648d9ded93e44d92a4d5dd162d0e92e815e9e538738b1d62f293774631.scope: Deactivated successfully.
Dec 09 16:39:29 compute-0 podman[259362]: 2025-12-09 16:39:29.342424475 +0000 UTC m=+0.435316647 container died cb21cc648d9ded93e44d92a4d5dd162d0e92e815e9e538738b1d62f293774631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8400d93734e3063d0061f5f97ae8b6d6fd88dfd5eb4fbaf9d538524bf992a95-merged.mount: Deactivated successfully.
Dec 09 16:39:29 compute-0 podman[259362]: 2025-12-09 16:39:29.557071434 +0000 UTC m=+0.649963606 container remove cb21cc648d9ded93e44d92a4d5dd162d0e92e815e9e538738b1d62f293774631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 09 16:39:29 compute-0 sudo[259285]: pam_unix(sudo:session): session closed for user root
Dec 09 16:39:29 compute-0 systemd[1]: libpod-conmon-cb21cc648d9ded93e44d92a4d5dd162d0e92e815e9e538738b1d62f293774631.scope: Deactivated successfully.
Dec 09 16:39:29 compute-0 sudo[259399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:39:29 compute-0 sudo[259399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:39:29 compute-0 sudo[259399]: pam_unix(sudo:session): session closed for user root
Dec 09 16:39:29 compute-0 sudo[259424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:39:29 compute-0 sudo[259424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:39:30 compute-0 podman[259461]: 2025-12-09 16:39:30.055628325 +0000 UTC m=+0.051276008 container create ca61be0383ca4bb69dc7955878958f700811af30648b1e25a10e70c4c37b1883 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:39:30 compute-0 systemd[1]: Started libpod-conmon-ca61be0383ca4bb69dc7955878958f700811af30648b1e25a10e70c4c37b1883.scope.
Dec 09 16:39:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:39:30 compute-0 podman[259461]: 2025-12-09 16:39:30.124825148 +0000 UTC m=+0.120472861 container init ca61be0383ca4bb69dc7955878958f700811af30648b1e25a10e70c4c37b1883 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_buck, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:39:30 compute-0 podman[259461]: 2025-12-09 16:39:30.034438787 +0000 UTC m=+0.030086530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:39:30 compute-0 podman[259461]: 2025-12-09 16:39:30.132543816 +0000 UTC m=+0.128191499 container start ca61be0383ca4bb69dc7955878958f700811af30648b1e25a10e70c4c37b1883 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_buck, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:39:30 compute-0 epic_buck[259477]: 167 167
Dec 09 16:39:30 compute-0 podman[259461]: 2025-12-09 16:39:30.135667584 +0000 UTC m=+0.131315297 container attach ca61be0383ca4bb69dc7955878958f700811af30648b1e25a10e70c4c37b1883 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:39:30 compute-0 systemd[1]: libpod-ca61be0383ca4bb69dc7955878958f700811af30648b1e25a10e70c4c37b1883.scope: Deactivated successfully.
Dec 09 16:39:30 compute-0 podman[259461]: 2025-12-09 16:39:30.137092604 +0000 UTC m=+0.132740307 container died ca61be0383ca4bb69dc7955878958f700811af30648b1e25a10e70c4c37b1883 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_buck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:39:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6174913e4a924e7b197e8fde0c50da49a422a5fdf9d0f111fea99f0eb06b6c4-merged.mount: Deactivated successfully.
Dec 09 16:39:30 compute-0 podman[259461]: 2025-12-09 16:39:30.174046217 +0000 UTC m=+0.169693910 container remove ca61be0383ca4bb69dc7955878958f700811af30648b1e25a10e70c4c37b1883 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_buck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:39:30 compute-0 systemd[1]: libpod-conmon-ca61be0383ca4bb69dc7955878958f700811af30648b1e25a10e70c4c37b1883.scope: Deactivated successfully.
Dec 09 16:39:30 compute-0 ceph-mon[75222]: pgmap v1221: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:30 compute-0 podman[259501]: 2025-12-09 16:39:30.366962672 +0000 UTC m=+0.066966791 container create bec2e1d8c66229d3bfbb907f450bcce6d1b0a36850613a26e61bafe1758a3734 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 09 16:39:30 compute-0 systemd[1]: Started libpod-conmon-bec2e1d8c66229d3bfbb907f450bcce6d1b0a36850613a26e61bafe1758a3734.scope.
Dec 09 16:39:30 compute-0 podman[259501]: 2025-12-09 16:39:30.340062583 +0000 UTC m=+0.040066682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:39:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a3d1aee01c3554673d41902408d34338d6091ca31090ae2c6ff4fa229f9614/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a3d1aee01c3554673d41902408d34338d6091ca31090ae2c6ff4fa229f9614/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a3d1aee01c3554673d41902408d34338d6091ca31090ae2c6ff4fa229f9614/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a3d1aee01c3554673d41902408d34338d6091ca31090ae2c6ff4fa229f9614/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:39:30 compute-0 podman[259501]: 2025-12-09 16:39:30.468545319 +0000 UTC m=+0.168549448 container init bec2e1d8c66229d3bfbb907f450bcce6d1b0a36850613a26e61bafe1758a3734 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 09 16:39:30 compute-0 podman[259501]: 2025-12-09 16:39:30.485797826 +0000 UTC m=+0.185801895 container start bec2e1d8c66229d3bfbb907f450bcce6d1b0a36850613a26e61bafe1758a3734 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:39:30 compute-0 podman[259501]: 2025-12-09 16:39:30.490468598 +0000 UTC m=+0.190472707 container attach bec2e1d8c66229d3bfbb907f450bcce6d1b0a36850613a26e61bafe1758a3734 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 09 16:39:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:39:31 compute-0 lvm[259596]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:39:31 compute-0 lvm[259598]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:39:31 compute-0 lvm[259596]: VG ceph_vg0 finished
Dec 09 16:39:31 compute-0 lvm[259598]: VG ceph_vg1 finished
Dec 09 16:39:31 compute-0 lvm[259599]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:39:31 compute-0 lvm[259599]: VG ceph_vg2 finished
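The lvm[...] lines above are LVM's event-driven autoactivation confirming that each PV has appeared and its VG is complete. The ceph.* tags that cephadm reads live on the LVs themselves, so they can also be queried host-side; a sketch, assuming an lvm2 build with JSON reporting (the report layout here is recalled from lvm2's `--reportformat json` output and may vary by version):

    import json
    import subprocess

    def ceph_tagged_lvs():
        # lv_tags is the same comma-separated string shown in the
        # "lv_tags" fields of the lvm list JSON above.
        out = subprocess.run(
            ["lvs", "-o", "lv_name,vg_name,lv_tags", "--reportformat", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        lvs = json.loads(out)["report"][0]["lv"]
        return [lv for lv in lvs if "ceph.osd_id=" in lv["lv_tags"]]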
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.206628) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298371206688, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1025, "num_deletes": 251, "total_data_size": 1480767, "memory_usage": 1504448, "flush_reason": "Manual Compaction"}
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298371216235, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1455508, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24609, "largest_seqno": 25633, "table_properties": {"data_size": 1450480, "index_size": 2552, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10831, "raw_average_key_size": 19, "raw_value_size": 1440421, "raw_average_value_size": 2618, "num_data_blocks": 114, "num_entries": 550, "num_filter_entries": 550, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765298276, "oldest_key_time": 1765298276, "file_creation_time": 1765298371, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 9669 microseconds, and 4349 cpu microseconds.
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.216305) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1455508 bytes OK
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.216325) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.217536) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.217549) EVENT_LOG_v1 {"time_micros": 1765298371217544, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.217566) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1475933, prev total WAL file size 1475933, number of live WAL files 2.
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.218117) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1421KB)], [56(7613KB)]
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298371218147, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9251788, "oldest_snapshot_seqno": -1}
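The key range in the "Manual compaction from level-0 to level-6" line above is hex-encoded RocksDB keys; decoding shows the monitor is compacting its paxos transaction entries (the NUL byte separates the key prefix from the version number):

    # 0x7061786F73 spells "paxos"; the range is paxos/2008 .. paxos/2260.
    start = bytes.fromhex("7061786F730032303038")
    end = bytes.fromhex("7061786F730032323630")
    print(start.split(b"\x00"))  # [b'paxos', b'2008']
    print(end.split(b"\x00"))    # [b'paxos', b'2260']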
Dec 09 16:39:31 compute-0 zen_brattain[259518]: {}
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4681 keys, 7499916 bytes, temperature: kUnknown
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298371257307, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7499916, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7468463, "index_size": 18671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11717, "raw_key_size": 117237, "raw_average_key_size": 25, "raw_value_size": 7383518, "raw_average_value_size": 1577, "num_data_blocks": 770, "num_entries": 4681, "num_filter_entries": 4681, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765298371, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.257506) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7499916 bytes
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.258982) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.9 rd, 191.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.4 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(11.5) write-amplify(5.2) OK, records in: 5195, records dropped: 514 output_compression: NoCompression
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.259004) EVENT_LOG_v1 {"time_micros": 1765298371258994, "job": 30, "event": "compaction_finished", "compaction_time_micros": 39222, "compaction_time_cpu_micros": 16628, "output_level": 6, "num_output_files": 1, "total_output_size": 7499916, "num_input_records": 5195, "num_output_records": 4681, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298371259536, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298371260955, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.218041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.261055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.261060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.261062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.261064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:39:31 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:39:31.261066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
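The amplification figures in the compaction summary above can be reproduced from the logged byte counts (1455508 bytes flushed as table #58, input_data_size 9251788 from the compaction_started event, output table #59 at 7499916 bytes), consistent with RocksDB reporting write amplification relative to the newly flushed L0 bytes:

    l0_in = 1455508            # table #58 (the Level-0 flush)
    total_in = 9251788         # input_data_size from compaction_started
    out = 7499916              # table #59, total_output_size

    print(round(out / l0_in, 1))               # 5.2  -> write-amplify
    print(round((total_in + out) / l0_in, 1))  # 11.5 -> read-write-amplify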
Dec 09 16:39:31 compute-0 systemd[1]: libpod-bec2e1d8c66229d3bfbb907f450bcce6d1b0a36850613a26e61bafe1758a3734.scope: Deactivated successfully.
Dec 09 16:39:31 compute-0 podman[259501]: 2025-12-09 16:39:31.286253827 +0000 UTC m=+0.986257876 container died bec2e1d8c66229d3bfbb907f450bcce6d1b0a36850613a26e61bafe1758a3734 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:39:31 compute-0 systemd[1]: libpod-bec2e1d8c66229d3bfbb907f450bcce6d1b0a36850613a26e61bafe1758a3734.scope: Consumed 1.345s CPU time.
Dec 09 16:39:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-55a3d1aee01c3554673d41902408d34338d6091ca31090ae2c6ff4fa229f9614-merged.mount: Deactivated successfully.
Dec 09 16:39:31 compute-0 podman[259501]: 2025-12-09 16:39:31.333983264 +0000 UTC m=+1.033987303 container remove bec2e1d8c66229d3bfbb907f450bcce6d1b0a36850613a26e61bafe1758a3734 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:39:31 compute-0 systemd[1]: libpod-conmon-bec2e1d8c66229d3bfbb907f450bcce6d1b0a36850613a26e61bafe1758a3734.scope: Deactivated successfully.
Dec 09 16:39:31 compute-0 sudo[259424]: pam_unix(sudo:session): session closed for user root
Dec 09 16:39:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:39:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:39:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:39:31 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:39:31 compute-0 sudo[259616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:39:31 compute-0 sudo[259616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:39:31 compute-0 sudo[259616]: pam_unix(sudo:session): session closed for user root
Dec 09 16:39:32 compute-0 ceph-mon[75222]: pgmap v1222: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:32 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:39:32 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:39:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:33 compute-0 podman[259641]: 2025-12-09 16:39:33.640092321 +0000 UTC m=+0.076892541 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 09 16:39:34 compute-0 ceph-mon[75222]: pgmap v1223: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:39:36 compute-0 ceph-mon[75222]: pgmap v1224: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.427166042984902e-07 of space, bias 1.0, pg target 0.00025281498128954704 quantized to 32 (current 32)
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7749663169956723e-06 of space, bias 4.0, pg target 0.0021299595803948067 quantized to 16 (current 16)
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:39:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:39:38 compute-0 ceph-mon[75222]: pgmap v1225: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:40 compute-0 ceph-mon[75222]: pgmap v1226: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:39:42 compute-0 ceph-mon[75222]: pgmap v1227: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:43 compute-0 ceph-mon[75222]: pgmap v1228: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:45 compute-0 ceph-mon[75222]: pgmap v1229: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:39:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:47 compute-0 ceph-mon[75222]: pgmap v1230: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:48 compute-0 podman[259663]: 2025-12-09 16:39:48.666297134 +0000 UTC m=+0.098921863 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 09 16:39:48 compute-0 podman[259662]: 2025-12-09 16:39:48.682177052 +0000 UTC m=+0.118860395 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 09 16:39:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:49 compute-0 ceph-mon[75222]: pgmap v1231: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:39:51 compute-0 ceph-mon[75222]: pgmap v1232: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:53 compute-0 ceph-mon[75222]: pgmap v1233: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:55 compute-0 ceph-mon[75222]: pgmap v1234: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:39:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:39:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:39:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:39:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:39:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:39:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:39:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:57 compute-0 ceph-mon[75222]: pgmap v1235: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:39:59 compute-0 ceph-mon[75222]: pgmap v1236: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:00 compute-0 sshd-session[259710]: Invalid user oracle from 146.190.31.45 port 33706
Dec 09 16:40:00 compute-0 sshd-session[259710]: Connection closed by invalid user oracle 146.190.31.45 port 33706 [preauth]
Dec 09 16:40:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:40:01 compute-0 ceph-mon[75222]: pgmap v1237: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:03 compute-0 ceph-mon[75222]: pgmap v1238: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:04 compute-0 podman[259712]: 2025-12-09 16:40:04.614524655 +0000 UTC m=+0.061023441 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 09 16:40:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:05 compute-0 ceph-mon[75222]: pgmap v1239: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:40:06 compute-0 nova_compute[243452]: 2025-12-09 16:40:06.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:40:06 compute-0 nova_compute[243452]: 2025-12-09 16:40:06.336 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:40:06 compute-0 nova_compute[243452]: 2025-12-09 16:40:06.336 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:40:06 compute-0 nova_compute[243452]: 2025-12-09 16:40:06.336 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:40:06 compute-0 nova_compute[243452]: 2025-12-09 16:40:06.337 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:40:06 compute-0 nova_compute[243452]: 2025-12-09 16:40:06.337 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:40:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:40:06 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/287211444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:40:06 compute-0 nova_compute[243452]: 2025-12-09 16:40:06.882 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:40:06 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/287211444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:40:07 compute-0 nova_compute[243452]: 2025-12-09 16:40:07.065 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:40:07 compute-0 nova_compute[243452]: 2025-12-09 16:40:07.066 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5133MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:40:07 compute-0 nova_compute[243452]: 2025-12-09 16:40:07.067 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:40:07 compute-0 nova_compute[243452]: 2025-12-09 16:40:07.067 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:40:07 compute-0 nova_compute[243452]: 2025-12-09 16:40:07.290 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:40:07 compute-0 nova_compute[243452]: 2025-12-09 16:40:07.291 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:40:07 compute-0 nova_compute[243452]: 2025-12-09 16:40:07.311 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:40:07 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:40:07 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4249196937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:40:07 compute-0 nova_compute[243452]: 2025-12-09 16:40:07.813 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:40:07 compute-0 nova_compute[243452]: 2025-12-09 16:40:07.818 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:40:07 compute-0 nova_compute[243452]: 2025-12-09 16:40:07.880 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:40:07 compute-0 nova_compute[243452]: 2025-12-09 16:40:07.882 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:40:07 compute-0 nova_compute[243452]: 2025-12-09 16:40:07.882 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:40:07 compute-0 ceph-mon[75222]: pgmap v1240: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:07 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4249196937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:40:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:08 compute-0 nova_compute[243452]: 2025-12-09 16:40:08.883 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:40:08 compute-0 nova_compute[243452]: 2025-12-09 16:40:08.883 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:40:08 compute-0 nova_compute[243452]: 2025-12-09 16:40:08.884 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:40:08 compute-0 nova_compute[243452]: 2025-12-09 16:40:08.901 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:40:08 compute-0 nova_compute[243452]: 2025-12-09 16:40:08.901 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:40:09 compute-0 nova_compute[243452]: 2025-12-09 16:40:09.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:40:09 compute-0 ceph-mon[75222]: pgmap v1241: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:40:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3415747299' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:40:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:40:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3415747299' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:40:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3415747299' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:40:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3415747299' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:40:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:40:11 compute-0 nova_compute[243452]: 2025-12-09 16:40:11.047 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:40:11 compute-0 nova_compute[243452]: 2025-12-09 16:40:11.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:40:11 compute-0 nova_compute[243452]: 2025-12-09 16:40:11.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:40:11 compute-0 ceph-mon[75222]: pgmap v1242: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:12 compute-0 nova_compute[243452]: 2025-12-09 16:40:12.047 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:40:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:13 compute-0 nova_compute[243452]: 2025-12-09 16:40:13.607 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:40:13 compute-0 nova_compute[243452]: 2025-12-09 16:40:13.608 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:40:13 compute-0 ceph-mon[75222]: pgmap v1243: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:14 compute-0 nova_compute[243452]: 2025-12-09 16:40:14.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:40:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:40:15 compute-0 ceph-mon[75222]: pgmap v1244: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:40:17.860 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:40:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:40:17.860 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:40:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:40:17.860 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:40:17 compute-0 ceph-mon[75222]: pgmap v1245: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:19 compute-0 podman[259780]: 2025-12-09 16:40:19.62168331 +0000 UTC m=+0.068217005 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec 09 16:40:19 compute-0 podman[259779]: 2025-12-09 16:40:19.683546144 +0000 UTC m=+0.122198576 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 09 16:40:20 compute-0 ceph-mon[75222]: pgmap v1246: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:40:22 compute-0 ceph-mon[75222]: pgmap v1247: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:22 compute-0 sshd-session[259824]: Invalid user admin from 45.148.10.121 port 45764
Dec 09 16:40:22 compute-0 sshd-session[259824]: Connection closed by invalid user admin 45.148.10.121 port 45764 [preauth]
Dec 09 16:40:24 compute-0 ceph-mon[75222]: pgmap v1248: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:40:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:40:25
Dec 09 16:40:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:40:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:40:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'volumes', '.mgr', '.rgw.root', 'vms', 'default.rgw.log', 'images']
Dec 09 16:40:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:40:26 compute-0 ceph-mon[75222]: pgmap v1249: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:40:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:40:28 compute-0 ceph-mon[75222]: pgmap v1250: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:30 compute-0 ceph-mon[75222]: pgmap v1251: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:40:31 compute-0 sudo[259826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:40:31 compute-0 sudo[259826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:40:31 compute-0 sudo[259826]: pam_unix(sudo:session): session closed for user root
Dec 09 16:40:31 compute-0 sudo[259851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:40:31 compute-0 sudo[259851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:40:32 compute-0 ceph-mon[75222]: pgmap v1252: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:32 compute-0 sudo[259851]: pam_unix(sudo:session): session closed for user root
Dec 09 16:40:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:40:32 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:40:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:40:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:40:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:40:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:40:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:40:32 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:40:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:40:32 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:40:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:40:32 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:40:32 compute-0 sudo[259907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:40:32 compute-0 sudo[259907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:40:32 compute-0 sudo[259907]: pam_unix(sudo:session): session closed for user root
Dec 09 16:40:32 compute-0 sudo[259932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:40:32 compute-0 sudo[259932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:40:32 compute-0 podman[259968]: 2025-12-09 16:40:32.708783914 +0000 UTC m=+0.043191736 container create f61c0128b4addc5b1e262185836786ead3e3691d644ef5f3e4ee7236508bb8a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 16:40:32 compute-0 systemd[1]: Started libpod-conmon-f61c0128b4addc5b1e262185836786ead3e3691d644ef5f3e4ee7236508bb8a0.scope.
Dec 09 16:40:32 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:40:32 compute-0 podman[259968]: 2025-12-09 16:40:32.691469973 +0000 UTC m=+0.025877815 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:40:32 compute-0 podman[259968]: 2025-12-09 16:40:32.791785357 +0000 UTC m=+0.126193199 container init f61c0128b4addc5b1e262185836786ead3e3691d644ef5f3e4ee7236508bb8a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:40:32 compute-0 podman[259968]: 2025-12-09 16:40:32.798501088 +0000 UTC m=+0.132908910 container start f61c0128b4addc5b1e262185836786ead3e3691d644ef5f3e4ee7236508bb8a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:40:32 compute-0 podman[259968]: 2025-12-09 16:40:32.801810452 +0000 UTC m=+0.136218284 container attach f61c0128b4addc5b1e262185836786ead3e3691d644ef5f3e4ee7236508bb8a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:40:32 compute-0 romantic_mccarthy[259985]: 167 167
Dec 09 16:40:32 compute-0 systemd[1]: libpod-f61c0128b4addc5b1e262185836786ead3e3691d644ef5f3e4ee7236508bb8a0.scope: Deactivated successfully.
Dec 09 16:40:32 compute-0 podman[259968]: 2025-12-09 16:40:32.805179307 +0000 UTC m=+0.139587139 container died f61c0128b4addc5b1e262185836786ead3e3691d644ef5f3e4ee7236508bb8a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 09 16:40:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-659e90ef1ad4b96398d65ea6a58a7a64dbbda1874939b06d077ff8a46e571ea3-merged.mount: Deactivated successfully.
Dec 09 16:40:32 compute-0 podman[259968]: 2025-12-09 16:40:32.84548917 +0000 UTC m=+0.179897002 container remove f61c0128b4addc5b1e262185836786ead3e3691d644ef5f3e4ee7236508bb8a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:40:32 compute-0 systemd[1]: libpod-conmon-f61c0128b4addc5b1e262185836786ead3e3691d644ef5f3e4ee7236508bb8a0.scope: Deactivated successfully.
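
The bare "167 167" printed by the short-lived romantic_mccarthy container above is consistent with cephadm probing the ceph uid/gid inside the image before it touches host-owned paths (167:167 is the ceph user and group in the Ceph container images); the same one-shot create/start/died/remove pattern recurs below with nifty_darwin and kind_darwin. A minimal sketch of such a probe, assuming podman is on PATH and reusing the image digest from this log — an illustration, not cephadm's exact implementation:

    import subprocess

    # Image digest copied from the podman lines above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # Stat a ceph-owned path inside the image to learn which uid/gid the
    # daemons run as; the container exits immediately, matching the rapid
    # lifecycle sequence in the log.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        text=True,
    )
    uid, gid = out.split()
    print(uid, gid)  # expected here: 167 167
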
Dec 09 16:40:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:40:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:40:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:40:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:40:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:40:33 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:40:33 compute-0 podman[260010]: 2025-12-09 16:40:33.067300019 +0000 UTC m=+0.066896797 container create 68f4268f9fb3b6c0ef8d380016c3fa8e5ef2f1ad97e77b92148873864dd6c373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 09 16:40:33 compute-0 systemd[1]: Started libpod-conmon-68f4268f9fb3b6c0ef8d380016c3fa8e5ef2f1ad97e77b92148873864dd6c373.scope.
Dec 09 16:40:33 compute-0 podman[260010]: 2025-12-09 16:40:33.042100135 +0000 UTC m=+0.041696963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:40:33 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb24110a9d0f03f03362dabf665f2ae809b904f457fe8c07b87f63627d607cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb24110a9d0f03f03362dabf665f2ae809b904f457fe8c07b87f63627d607cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb24110a9d0f03f03362dabf665f2ae809b904f457fe8c07b87f63627d607cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb24110a9d0f03f03362dabf665f2ae809b904f457fe8c07b87f63627d607cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb24110a9d0f03f03362dabf665f2ae809b904f457fe8c07b87f63627d607cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:33 compute-0 podman[260010]: 2025-12-09 16:40:33.157360883 +0000 UTC m=+0.156957671 container init 68f4268f9fb3b6c0ef8d380016c3fa8e5ef2f1ad97e77b92148873864dd6c373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 09 16:40:33 compute-0 podman[260010]: 2025-12-09 16:40:33.168380856 +0000 UTC m=+0.167977624 container start 68f4268f9fb3b6c0ef8d380016c3fa8e5ef2f1ad97e77b92148873864dd6c373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:40:33 compute-0 podman[260010]: 2025-12-09 16:40:33.172218594 +0000 UTC m=+0.171815382 container attach 68f4268f9fb3b6c0ef8d380016c3fa8e5ef2f1ad97e77b92148873864dd6c373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:40:33 compute-0 distracted_lumiere[260026]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:40:33 compute-0 distracted_lumiere[260026]: --> All data devices are unavailable
Dec 09 16:40:33 compute-0 systemd[1]: libpod-68f4268f9fb3b6c0ef8d380016c3fa8e5ef2f1ad97e77b92148873864dd6c373.scope: Deactivated successfully.
Dec 09 16:40:33 compute-0 podman[260010]: 2025-12-09 16:40:33.705210057 +0000 UTC m=+0.704806835 container died 68f4268f9fb3b6c0ef8d380016c3fa8e5ef2f1ad97e77b92148873864dd6c373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Dec 09 16:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eb24110a9d0f03f03362dabf665f2ae809b904f457fe8c07b87f63627d607cf-merged.mount: Deactivated successfully.
Dec 09 16:40:33 compute-0 podman[260010]: 2025-12-09 16:40:33.744898452 +0000 UTC m=+0.744495240 container remove 68f4268f9fb3b6c0ef8d380016c3fa8e5ef2f1ad97e77b92148873864dd6c373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:40:33 compute-0 systemd[1]: libpod-conmon-68f4268f9fb3b6c0ef8d380016c3fa8e5ef2f1ad97e77b92148873864dd6c373.scope: Deactivated successfully.
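
The two distracted_lumiere lines above ("passed data devices: 0 physical, 3 LVM" / "All data devices are unavailable") are ceph-volume's drive-selection report: the OSD spec matched three LVM devices, and all three were rejected because each LV already carries an OSD, as the `lvm list` payload further down confirms. A conceptual sketch of that availability test, under the (hypothetical) assumption that the presence of ceph lv_tags is what marks an LV as taken:

    # Hypothetical availability check: an LV that already carries ceph
    # lv_tags (e.g. ceph.osd_id) is not available for a new OSD.
    def is_available(lv_tags: dict) -> bool:
        return "ceph.osd_id" not in lv_tags

    # The three LVs on this host, reduced to the relevant tag.
    lvs = [{"ceph.osd_id": "0"}, {"ceph.osd_id": "1"}, {"ceph.osd_id": "2"}]
    free = sum(is_available(tags) for tags in lvs)
    print(f"{free} of {len(lvs)} LVM data devices available")  # 0 of 3
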
Dec 09 16:40:33 compute-0 sudo[259932]: pam_unix(sudo:session): session closed for user root
Dec 09 16:40:33 compute-0 sudo[260058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:40:33 compute-0 sudo[260058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:40:33 compute-0 sudo[260058]: pam_unix(sudo:session): session closed for user root
Dec 09 16:40:33 compute-0 sudo[260083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:40:33 compute-0 sudo[260083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:40:34 compute-0 podman[260120]: 2025-12-09 16:40:34.201307154 +0000 UTC m=+0.038704399 container create f92b40ccd8fc869227eb85501bbd17b88c8856eda300e299be68ee5da52cb7bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 09 16:40:34 compute-0 systemd[1]: Started libpod-conmon-f92b40ccd8fc869227eb85501bbd17b88c8856eda300e299be68ee5da52cb7bf.scope.
Dec 09 16:40:34 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:40:34 compute-0 podman[260120]: 2025-12-09 16:40:34.18530644 +0000 UTC m=+0.022703715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:40:34 compute-0 podman[260120]: 2025-12-09 16:40:34.283031961 +0000 UTC m=+0.120429236 container init f92b40ccd8fc869227eb85501bbd17b88c8856eda300e299be68ee5da52cb7bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:40:34 compute-0 ceph-mon[75222]: pgmap v1253: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:34 compute-0 podman[260120]: 2025-12-09 16:40:34.290791561 +0000 UTC m=+0.128188826 container start f92b40ccd8fc869227eb85501bbd17b88c8856eda300e299be68ee5da52cb7bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 09 16:40:34 compute-0 nifty_darwin[260136]: 167 167
Dec 09 16:40:34 compute-0 podman[260120]: 2025-12-09 16:40:34.294524017 +0000 UTC m=+0.131921332 container attach f92b40ccd8fc869227eb85501bbd17b88c8856eda300e299be68ee5da52cb7bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 16:40:34 compute-0 systemd[1]: libpod-f92b40ccd8fc869227eb85501bbd17b88c8856eda300e299be68ee5da52cb7bf.scope: Deactivated successfully.
Dec 09 16:40:34 compute-0 podman[260120]: 2025-12-09 16:40:34.295454323 +0000 UTC m=+0.132851588 container died f92b40ccd8fc869227eb85501bbd17b88c8856eda300e299be68ee5da52cb7bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-db85710e3c871def46d6306c6b2314e99a4e9cff237ffed74b77139e36de00e4-merged.mount: Deactivated successfully.
Dec 09 16:40:34 compute-0 podman[260120]: 2025-12-09 16:40:34.332931196 +0000 UTC m=+0.170328451 container remove f92b40ccd8fc869227eb85501bbd17b88c8856eda300e299be68ee5da52cb7bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:40:34 compute-0 systemd[1]: libpod-conmon-f92b40ccd8fc869227eb85501bbd17b88c8856eda300e299be68ee5da52cb7bf.scope: Deactivated successfully.
Dec 09 16:40:34 compute-0 podman[260159]: 2025-12-09 16:40:34.503487142 +0000 UTC m=+0.040258793 container create 70f79777fe4284c9951f8f1d88d3777396e8f1217f39a4d3785a9a34b1b4f71b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_margulis, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:40:34 compute-0 systemd[1]: Started libpod-conmon-70f79777fe4284c9951f8f1d88d3777396e8f1217f39a4d3785a9a34b1b4f71b.scope.
Dec 09 16:40:34 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76efc80a4bac8de786406452cd11e7cc8d3017e282962738ed4675bd9707801a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76efc80a4bac8de786406452cd11e7cc8d3017e282962738ed4675bd9707801a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:34 compute-0 podman[260159]: 2025-12-09 16:40:34.485212073 +0000 UTC m=+0.021983744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76efc80a4bac8de786406452cd11e7cc8d3017e282962738ed4675bd9707801a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76efc80a4bac8de786406452cd11e7cc8d3017e282962738ed4675bd9707801a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:34 compute-0 podman[260159]: 2025-12-09 16:40:34.599176745 +0000 UTC m=+0.135948426 container init 70f79777fe4284c9951f8f1d88d3777396e8f1217f39a4d3785a9a34b1b4f71b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:40:34 compute-0 podman[260159]: 2025-12-09 16:40:34.613755688 +0000 UTC m=+0.150527379 container start 70f79777fe4284c9951f8f1d88d3777396e8f1217f39a4d3785a9a34b1b4f71b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:40:34 compute-0 podman[260159]: 2025-12-09 16:40:34.618553054 +0000 UTC m=+0.155324735 container attach 70f79777fe4284c9951f8f1d88d3777396e8f1217f39a4d3785a9a34b1b4f71b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_margulis, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:40:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:34 compute-0 condescending_margulis[260175]: {
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:     "0": [
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:         {
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "devices": [
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "/dev/loop3"
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             ],
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_name": "ceph_lv0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_size": "21470642176",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "name": "ceph_lv0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "tags": {
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.cluster_name": "ceph",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.crush_device_class": "",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.encrypted": "0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.objectstore": "bluestore",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.osd_id": "0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.type": "block",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.vdo": "0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.with_tpm": "0"
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             },
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "type": "block",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "vg_name": "ceph_vg0"
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:         }
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:     ],
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:     "1": [
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:         {
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "devices": [
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "/dev/loop4"
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             ],
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_name": "ceph_lv1",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_size": "21470642176",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "name": "ceph_lv1",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "tags": {
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.cluster_name": "ceph",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.crush_device_class": "",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.encrypted": "0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.objectstore": "bluestore",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.osd_id": "1",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.type": "block",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.vdo": "0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.with_tpm": "0"
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             },
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "type": "block",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "vg_name": "ceph_vg1"
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:         }
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:     ],
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:     "2": [
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:         {
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "devices": [
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "/dev/loop5"
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             ],
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_name": "ceph_lv2",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_size": "21470642176",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "name": "ceph_lv2",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "tags": {
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.cluster_name": "ceph",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.crush_device_class": "",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.encrypted": "0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.objectstore": "bluestore",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.osd_id": "2",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.type": "block",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.vdo": "0",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:                 "ceph.with_tpm": "0"
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             },
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "type": "block",
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:             "vg_name": "ceph_vg2"
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:         }
Dec 09 16:40:34 compute-0 condescending_margulis[260175]:     ]
Dec 09 16:40:34 compute-0 condescending_margulis[260175]: }
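
The JSON object above is the payload of the `ceph-volume ... lvm list --format json` call issued through sudo at 16:40:33: one key per OSD id, each carrying the backing LV, its physical device, and the ceph.* lv_tags. A short sketch that condenses such a payload into an OSD-to-device summary; the literal below is a trimmed copy of the osd.0 entry from this log (the full object also has keys "1" and "2"):

    import json

    # Trimmed from the payload above (osd.0 only).
    raw_json = """
    {
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
            "ceph.objectstore": "bluestore"
          }
        }
      ]
    }
    """

    for osd_id, lvs in sorted(json.loads(raw_json).items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")
    # osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 fsid=5f4f01e5-... objectstore=bluestore
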
Dec 09 16:40:34 compute-0 systemd[1]: libpod-70f79777fe4284c9951f8f1d88d3777396e8f1217f39a4d3785a9a34b1b4f71b.scope: Deactivated successfully.
Dec 09 16:40:34 compute-0 podman[260159]: 2025-12-09 16:40:34.914819355 +0000 UTC m=+0.451591006 container died 70f79777fe4284c9951f8f1d88d3777396e8f1217f39a4d3785a9a34b1b4f71b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_margulis, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-76efc80a4bac8de786406452cd11e7cc8d3017e282962738ed4675bd9707801a-merged.mount: Deactivated successfully.
Dec 09 16:40:34 compute-0 podman[260159]: 2025-12-09 16:40:34.95028431 +0000 UTC m=+0.487055961 container remove 70f79777fe4284c9951f8f1d88d3777396e8f1217f39a4d3785a9a34b1b4f71b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_margulis, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:40:34 compute-0 systemd[1]: libpod-conmon-70f79777fe4284c9951f8f1d88d3777396e8f1217f39a4d3785a9a34b1b4f71b.scope: Deactivated successfully.
Dec 09 16:40:34 compute-0 sudo[260083]: pam_unix(sudo:session): session closed for user root
Dec 09 16:40:35 compute-0 podman[260184]: 2025-12-09 16:40:35.021754737 +0000 UTC m=+0.074604357 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Dec 09 16:40:35 compute-0 sudo[260212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:40:35 compute-0 sudo[260212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:40:35 compute-0 sudo[260212]: pam_unix(sudo:session): session closed for user root
Dec 09 16:40:35 compute-0 sudo[260238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:40:35 compute-0 sudo[260238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:40:35 compute-0 podman[260275]: 2025-12-09 16:40:35.385265773 +0000 UTC m=+0.041050155 container create f5bf23b5a581ee7b3676df5a794ad357b74e81caea78c5bb4287b2842a974e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_darwin, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 09 16:40:35 compute-0 systemd[1]: Started libpod-conmon-f5bf23b5a581ee7b3676df5a794ad357b74e81caea78c5bb4287b2842a974e42.scope.
Dec 09 16:40:35 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:40:35 compute-0 podman[260275]: 2025-12-09 16:40:35.461448013 +0000 UTC m=+0.117232385 container init f5bf23b5a581ee7b3676df5a794ad357b74e81caea78c5bb4287b2842a974e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_darwin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:40:35 compute-0 podman[260275]: 2025-12-09 16:40:35.367543141 +0000 UTC m=+0.023327533 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:40:35 compute-0 podman[260275]: 2025-12-09 16:40:35.467943037 +0000 UTC m=+0.123727439 container start f5bf23b5a581ee7b3676df5a794ad357b74e81caea78c5bb4287b2842a974e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:40:35 compute-0 podman[260275]: 2025-12-09 16:40:35.472680622 +0000 UTC m=+0.128465014 container attach f5bf23b5a581ee7b3676df5a794ad357b74e81caea78c5bb4287b2842a974e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_darwin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 16:40:35 compute-0 kind_darwin[260291]: 167 167
Dec 09 16:40:35 compute-0 systemd[1]: libpod-f5bf23b5a581ee7b3676df5a794ad357b74e81caea78c5bb4287b2842a974e42.scope: Deactivated successfully.
Dec 09 16:40:35 compute-0 podman[260275]: 2025-12-09 16:40:35.475411889 +0000 UTC m=+0.131196281 container died f5bf23b5a581ee7b3676df5a794ad357b74e81caea78c5bb4287b2842a974e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 09 16:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0007d054f46b92244d4d96930aee6c2e708c9760379066fb22f711751534b2c2-merged.mount: Deactivated successfully.
Dec 09 16:40:35 compute-0 podman[260275]: 2025-12-09 16:40:35.51917312 +0000 UTC m=+0.174957532 container remove f5bf23b5a581ee7b3676df5a794ad357b74e81caea78c5bb4287b2842a974e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:40:35 compute-0 systemd[1]: libpod-conmon-f5bf23b5a581ee7b3676df5a794ad357b74e81caea78c5bb4287b2842a974e42.scope: Deactivated successfully.
Dec 09 16:40:35 compute-0 podman[260315]: 2025-12-09 16:40:35.694479691 +0000 UTC m=+0.044212385 container create b1331aa2db994de574f99ddb6477f85a17b6ee2034c54d5dbd18a6413b41ed10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_merkle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 09 16:40:35 compute-0 systemd[1]: Started libpod-conmon-b1331aa2db994de574f99ddb6477f85a17b6ee2034c54d5dbd18a6413b41ed10.scope.
Dec 09 16:40:35 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:40:35 compute-0 podman[260315]: 2025-12-09 16:40:35.676159261 +0000 UTC m=+0.025891995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec12f544d3160546f0b0e2693359fc605239b2fc90e403e66f8d211a80beab2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec12f544d3160546f0b0e2693359fc605239b2fc90e403e66f8d211a80beab2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec12f544d3160546f0b0e2693359fc605239b2fc90e403e66f8d211a80beab2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec12f544d3160546f0b0e2693359fc605239b2fc90e403e66f8d211a80beab2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:40:35 compute-0 podman[260315]: 2025-12-09 16:40:35.796877534 +0000 UTC m=+0.146610268 container init b1331aa2db994de574f99ddb6477f85a17b6ee2034c54d5dbd18a6413b41ed10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_merkle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:40:35 compute-0 podman[260315]: 2025-12-09 16:40:35.805506539 +0000 UTC m=+0.155239253 container start b1331aa2db994de574f99ddb6477f85a17b6ee2034c54d5dbd18a6413b41ed10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 09 16:40:35 compute-0 podman[260315]: 2025-12-09 16:40:35.809256235 +0000 UTC m=+0.158988949 container attach b1331aa2db994de574f99ddb6477f85a17b6ee2034c54d5dbd18a6413b41ed10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_merkle, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:40:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:40:36 compute-0 ceph-mon[75222]: pgmap v1254: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:36 compute-0 lvm[260411]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:40:36 compute-0 lvm[260410]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:40:36 compute-0 lvm[260411]: VG ceph_vg1 finished
Dec 09 16:40:36 compute-0 lvm[260410]: VG ceph_vg0 finished
Dec 09 16:40:36 compute-0 lvm[260413]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:40:36 compute-0 lvm[260413]: VG ceph_vg2 finished
Dec 09 16:40:36 compute-0 ecstatic_merkle[260332]: {}
Dec 09 16:40:36 compute-0 systemd[1]: libpod-b1331aa2db994de574f99ddb6477f85a17b6ee2034c54d5dbd18a6413b41ed10.scope: Deactivated successfully.
Dec 09 16:40:36 compute-0 podman[260315]: 2025-12-09 16:40:36.619666354 +0000 UTC m=+0.969399128 container died b1331aa2db994de574f99ddb6477f85a17b6ee2034c54d5dbd18a6413b41ed10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_merkle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec 09 16:40:36 compute-0 systemd[1]: libpod-b1331aa2db994de574f99ddb6477f85a17b6ee2034c54d5dbd18a6413b41ed10.scope: Consumed 1.248s CPU time.
Dec 09 16:40:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec12f544d3160546f0b0e2693359fc605239b2fc90e403e66f8d211a80beab2e-merged.mount: Deactivated successfully.
Dec 09 16:40:36 compute-0 podman[260315]: 2025-12-09 16:40:36.685185622 +0000 UTC m=+1.034918326 container remove b1331aa2db994de574f99ddb6477f85a17b6ee2034c54d5dbd18a6413b41ed10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:40:36 compute-0 systemd[1]: libpod-conmon-b1331aa2db994de574f99ddb6477f85a17b6ee2034c54d5dbd18a6413b41ed10.scope: Deactivated successfully.
Dec 09 16:40:36 compute-0 sudo[260238]: pam_unix(sudo:session): session closed for user root
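
The follow-up `ceph-volume ... raw list --format json` call (sudo line at 16:40:35) printed only `{}` (the ecstatic_merkle line above): no raw-mode OSDs exist on this host, which fits the earlier `lvm list` payload showing all three OSDs as LVM-backed. A tiny sketch of how the two payloads read together; the literals mirror the shapes seen in this log:

    import json

    lvm_list_json = '{"0": [{}], "1": [{}], "2": [{}]}'  # keys as in the earlier payload
    raw_list_json = "{}"                                  # exactly what was logged above

    lvm_ids = sorted(json.loads(lvm_list_json), key=int)
    raw_ids = sorted(json.loads(raw_list_json), key=int)
    print("LVM-backed OSDs:", lvm_ids)  # ['0', '1', '2']
    print("raw-mode OSDs:  ", raw_ids)  # []
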
Dec 09 16:40:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:40:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:40:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:40:36 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:36 compute-0 sudo[260430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:40:36 compute-0 sudo[260430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:40:36 compute-0 sudo[260430]: pam_unix(sudo:session): session closed for user root
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.427166042984902e-07 of space, bias 1.0, pg target 0.00025281498128954704 quantized to 32 (current 32)
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7749663169956723e-06 of space, bias 4.0, pg target 0.0021299595803948067 quantized to 16 (current 16)
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:40:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:40:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:40:37 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:40:37 compute-0 ceph-mon[75222]: pgmap v1255: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:39 compute-0 ceph-mon[75222]: pgmap v1256: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:40:41 compute-0 ceph-mon[75222]: pgmap v1257: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:43 compute-0 ceph-mon[75222]: pgmap v1258: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:44 compute-0 sshd-session[260455]: Invalid user oracle from 146.190.31.45 port 34226
Dec 09 16:40:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:44 compute-0 sshd-session[260455]: Connection closed by invalid user oracle 146.190.31.45 port 34226 [preauth]
Dec 09 16:40:45 compute-0 ceph-mon[75222]: pgmap v1259: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:45 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:40:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:47 compute-0 ceph-mon[75222]: pgmap v1260: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:49 compute-0 ceph-mon[75222]: pgmap v1261: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:50 compute-0 podman[260458]: 2025-12-09 16:40:50.615900554 +0000 UTC m=+0.057827711 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:40:50 compute-0 podman[260457]: 2025-12-09 16:40:50.688903144 +0000 UTC m=+0.130684687 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 09 16:40:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:50 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:40:51 compute-0 ceph-mon[75222]: pgmap v1262: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:53 compute-0 ceph-mon[75222]: pgmap v1263: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:55 compute-0 ceph-mon[75222]: pgmap v1264: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:55 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:40:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:40:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:40:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:40:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:40:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:40:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:40:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:57 compute-0 ceph-mon[75222]: pgmap v1265: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:40:59 compute-0 ceph-mon[75222]: pgmap v1266: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:00 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:41:01 compute-0 ceph-mon[75222]: pgmap v1267: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:03 compute-0 ceph-mon[75222]: pgmap v1268: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:05 compute-0 podman[260501]: 2025-12-09 16:41:05.608513901 +0000 UTC m=+0.056544704 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd)
Dec 09 16:41:05 compute-0 ceph-mon[75222]: pgmap v1269: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:05 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:41:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:07 compute-0 ceph-mon[75222]: pgmap v1270: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:08 compute-0 nova_compute[243452]: 2025-12-09 16:41:08.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:41:08 compute-0 nova_compute[243452]: 2025-12-09 16:41:08.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:41:08 compute-0 nova_compute[243452]: 2025-12-09 16:41:08.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:41:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:09 compute-0 nova_compute[243452]: 2025-12-09 16:41:09.767 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:41:09 compute-0 nova_compute[243452]: 2025-12-09 16:41:09.768 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:41:09 compute-0 nova_compute[243452]: 2025-12-09 16:41:09.768 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:41:09 compute-0 nova_compute[243452]: 2025-12-09 16:41:09.836 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:41:09 compute-0 nova_compute[243452]: 2025-12-09 16:41:09.837 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:41:09 compute-0 nova_compute[243452]: 2025-12-09 16:41:09.838 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:41:09 compute-0 nova_compute[243452]: 2025-12-09 16:41:09.838 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:41:09 compute-0 nova_compute[243452]: 2025-12-09 16:41:09.838 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:41:09 compute-0 ceph-mon[75222]: pgmap v1271: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:41:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1617766280' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:41:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:41:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1617766280' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:41:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:41:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1892866511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:41:10 compute-0 nova_compute[243452]: 2025-12-09 16:41:10.449 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:41:10 compute-0 nova_compute[243452]: 2025-12-09 16:41:10.667 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:41:10 compute-0 nova_compute[243452]: 2025-12-09 16:41:10.668 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5125MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:41:10 compute-0 nova_compute[243452]: 2025-12-09 16:41:10.668 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:41:10 compute-0 nova_compute[243452]: 2025-12-09 16:41:10.668 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:41:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:10 compute-0 nova_compute[243452]: 2025-12-09 16:41:10.834 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:41:10 compute-0 nova_compute[243452]: 2025-12-09 16:41:10.835 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:41:10 compute-0 nova_compute[243452]: 2025-12-09 16:41:10.856 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:41:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:41:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/1617766280' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:41:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/1617766280' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:41:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1892866511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:41:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:41:11 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/608340854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:41:11 compute-0 nova_compute[243452]: 2025-12-09 16:41:11.448 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:41:11 compute-0 nova_compute[243452]: 2025-12-09 16:41:11.455 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:41:11 compute-0 nova_compute[243452]: 2025-12-09 16:41:11.642 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:41:11 compute-0 nova_compute[243452]: 2025-12-09 16:41:11.644 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:41:11 compute-0 nova_compute[243452]: 2025-12-09 16:41:11.644 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:41:12 compute-0 ceph-mon[75222]: pgmap v1272: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:12 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/608340854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:41:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:12 compute-0 nova_compute[243452]: 2025-12-09 16:41:12.931 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:41:12 compute-0 nova_compute[243452]: 2025-12-09 16:41:12.932 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:41:12 compute-0 nova_compute[243452]: 2025-12-09 16:41:12.932 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:41:12 compute-0 nova_compute[243452]: 2025-12-09 16:41:12.932 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:41:13 compute-0 nova_compute[243452]: 2025-12-09 16:41:13.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:41:13 compute-0 nova_compute[243452]: 2025-12-09 16:41:13.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:41:14 compute-0 ceph-mon[75222]: pgmap v1273: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:14 compute-0 nova_compute[243452]: 2025-12-09 16:41:14.056 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:41:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:15 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:41:16 compute-0 ceph-mon[75222]: pgmap v1274: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:41:17.860 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:41:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:41:17.861 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:41:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:41:17.861 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:41:18 compute-0 ceph-mon[75222]: pgmap v1275: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:20 compute-0 ceph-mon[75222]: pgmap v1276: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:20 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:41:21 compute-0 podman[260565]: 2025-12-09 16:41:21.654651784 +0000 UTC m=+0.094477729 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 16:41:21 compute-0 podman[260566]: 2025-12-09 16:41:21.654994694 +0000 UTC m=+0.080993427 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 09 16:41:22 compute-0 ceph-mon[75222]: pgmap v1277: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:24 compute-0 ceph-mon[75222]: pgmap v1278: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:41:25
Dec 09 16:41:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:41:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:41:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.control', 'vms', '.rgw.root', 'images', 'volumes']
Dec 09 16:41:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:41:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:41:26 compute-0 ceph-mon[75222]: pgmap v1279: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:41:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:41:28 compute-0 ceph-mon[75222]: pgmap v1280: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:29 compute-0 sshd-session[260610]: Invalid user oracle from 146.190.31.45 port 32780
Dec 09 16:41:29 compute-0 sshd-session[260610]: Connection closed by invalid user oracle 146.190.31.45 port 32780 [preauth]
Dec 09 16:41:30 compute-0 ceph-mon[75222]: pgmap v1281: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:41:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Dec 09 16:41:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Dec 09 16:41:32 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Dec 09 16:41:32 compute-0 ceph-mon[75222]: pgmap v1282: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:33 compute-0 ceph-mon[75222]: osdmap e150: 3 total, 3 up, 3 in
Dec 09 16:41:34 compute-0 ceph-mon[75222]: pgmap v1284: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 16 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.6 MiB/s wr, 5 op/s
Dec 09 16:41:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:41:36 compute-0 ceph-mon[75222]: pgmap v1285: 305 pgs: 305 active+clean; 16 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.6 MiB/s wr, 5 op/s
Dec 09 16:41:36 compute-0 podman[260612]: 2025-12-09 16:41:36.660995338 +0000 UTC m=+0.103428614 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003337714792718457 of space, bias 1.0, pg target 0.10013144378155371 quantized to 32 (current 32)
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7458257509363095e-06 of space, bias 4.0, pg target 0.0020949909011235713 quantized to 16 (current 16)
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:41:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:41:36 compute-0 sudo[260632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:41:36 compute-0 sudo[260632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:41:36 compute-0 sudo[260632]: pam_unix(sudo:session): session closed for user root
Dec 09 16:41:37 compute-0 sudo[260657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:41:37 compute-0 sudo[260657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:41:37 compute-0 sudo[260657]: pam_unix(sudo:session): session closed for user root
Dec 09 16:41:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:41:37 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:41:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:41:37 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:41:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:41:37 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:41:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:41:37 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:41:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:41:37 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:41:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:41:37 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:41:37 compute-0 sudo[260713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:41:37 compute-0 sudo[260713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:41:37 compute-0 sudo[260713]: pam_unix(sudo:session): session closed for user root
Dec 09 16:41:37 compute-0 sudo[260738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:41:37 compute-0 sudo[260738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:41:38 compute-0 ceph-mon[75222]: pgmap v1286: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Dec 09 16:41:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:41:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:41:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:41:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:41:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:41:38 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:41:38 compute-0 podman[260775]: 2025-12-09 16:41:38.287119366 +0000 UTC m=+0.050223295 container create b30d4222fab6af61da14ca7ed3027ea77ca5cf69d8b08d75aeb5a16935e6af11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:41:38 compute-0 systemd[1]: Started libpod-conmon-b30d4222fab6af61da14ca7ed3027ea77ca5cf69d8b08d75aeb5a16935e6af11.scope.
Dec 09 16:41:38 compute-0 podman[260775]: 2025-12-09 16:41:38.267617673 +0000 UTC m=+0.030721632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:41:38 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:41:38 compute-0 podman[260775]: 2025-12-09 16:41:38.380182584 +0000 UTC m=+0.143286523 container init b30d4222fab6af61da14ca7ed3027ea77ca5cf69d8b08d75aeb5a16935e6af11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:41:38 compute-0 podman[260775]: 2025-12-09 16:41:38.389070757 +0000 UTC m=+0.152174706 container start b30d4222fab6af61da14ca7ed3027ea77ca5cf69d8b08d75aeb5a16935e6af11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:41:38 compute-0 podman[260775]: 2025-12-09 16:41:38.393380599 +0000 UTC m=+0.156484558 container attach b30d4222fab6af61da14ca7ed3027ea77ca5cf69d8b08d75aeb5a16935e6af11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dirac, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 09 16:41:38 compute-0 gallant_dirac[260791]: 167 167
Dec 09 16:41:38 compute-0 systemd[1]: libpod-b30d4222fab6af61da14ca7ed3027ea77ca5cf69d8b08d75aeb5a16935e6af11.scope: Deactivated successfully.
Dec 09 16:41:38 compute-0 podman[260775]: 2025-12-09 16:41:38.396369623 +0000 UTC m=+0.159473582 container died b30d4222fab6af61da14ca7ed3027ea77ca5cf69d8b08d75aeb5a16935e6af11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 09 16:41:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed3248c4d9a7b105446f0e57f284798358e8e4c13777b49d065f590310e67525-merged.mount: Deactivated successfully.
Dec 09 16:41:38 compute-0 podman[260775]: 2025-12-09 16:41:38.449663645 +0000 UTC m=+0.212767604 container remove b30d4222fab6af61da14ca7ed3027ea77ca5cf69d8b08d75aeb5a16935e6af11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 09 16:41:38 compute-0 systemd[1]: libpod-conmon-b30d4222fab6af61da14ca7ed3027ea77ca5cf69d8b08d75aeb5a16935e6af11.scope: Deactivated successfully.
Dec 09 16:41:38 compute-0 podman[260814]: 2025-12-09 16:41:38.630330037 +0000 UTC m=+0.045554392 container create 3a58ba1e042a298ba50f079e08d50966d88255fb59908a2ca956aa87664e4389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:41:38 compute-0 systemd[1]: Started libpod-conmon-3a58ba1e042a298ba50f079e08d50966d88255fb59908a2ca956aa87664e4389.scope.
Dec 09 16:41:38 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdc9efcc373b0d3d03e0092c497bfc0e1abfaeeb24306f715043447e13b1693/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdc9efcc373b0d3d03e0092c497bfc0e1abfaeeb24306f715043447e13b1693/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdc9efcc373b0d3d03e0092c497bfc0e1abfaeeb24306f715043447e13b1693/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdc9efcc373b0d3d03e0092c497bfc0e1abfaeeb24306f715043447e13b1693/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdc9efcc373b0d3d03e0092c497bfc0e1abfaeeb24306f715043447e13b1693/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:38 compute-0 podman[260814]: 2025-12-09 16:41:38.704385567 +0000 UTC m=+0.119609912 container init 3a58ba1e042a298ba50f079e08d50966d88255fb59908a2ca956aa87664e4389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_snyder, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:41:38 compute-0 podman[260814]: 2025-12-09 16:41:38.611101902 +0000 UTC m=+0.026326287 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:41:38 compute-0 podman[260814]: 2025-12-09 16:41:38.715788001 +0000 UTC m=+0.131012356 container start 3a58ba1e042a298ba50f079e08d50966d88255fb59908a2ca956aa87664e4389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:41:38 compute-0 podman[260814]: 2025-12-09 16:41:38.719253979 +0000 UTC m=+0.134478334 container attach 3a58ba1e042a298ba50f079e08d50966d88255fb59908a2ca956aa87664e4389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 09 16:41:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Dec 09 16:41:39 compute-0 heuristic_snyder[260830]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:41:39 compute-0 heuristic_snyder[260830]: --> All data devices are unavailable
Dec 09 16:41:39 compute-0 systemd[1]: libpod-3a58ba1e042a298ba50f079e08d50966d88255fb59908a2ca956aa87664e4389.scope: Deactivated successfully.
Dec 09 16:41:39 compute-0 podman[260814]: 2025-12-09 16:41:39.22011089 +0000 UTC m=+0.635335255 container died 3a58ba1e042a298ba50f079e08d50966d88255fb59908a2ca956aa87664e4389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 09 16:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecdc9efcc373b0d3d03e0092c497bfc0e1abfaeeb24306f715043447e13b1693-merged.mount: Deactivated successfully.
Dec 09 16:41:39 compute-0 podman[260814]: 2025-12-09 16:41:39.284866837 +0000 UTC m=+0.700091202 container remove 3a58ba1e042a298ba50f079e08d50966d88255fb59908a2ca956aa87664e4389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:41:39 compute-0 systemd[1]: libpod-conmon-3a58ba1e042a298ba50f079e08d50966d88255fb59908a2ca956aa87664e4389.scope: Deactivated successfully.
Dec 09 16:41:39 compute-0 sudo[260738]: pam_unix(sudo:session): session closed for user root
Dec 09 16:41:39 compute-0 sudo[260861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:41:39 compute-0 sudo[260861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:41:39 compute-0 sudo[260861]: pam_unix(sudo:session): session closed for user root
Dec 09 16:41:39 compute-0 sudo[260886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:41:39 compute-0 sudo[260886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:41:39 compute-0 podman[260921]: 2025-12-09 16:41:39.759990777 +0000 UTC m=+0.044110811 container create acaddd7c8bf595b7fa7cf3c57ffdf60ac69ac5bdceb756c4a2ffcc451ab3483c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_haslett, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Dec 09 16:41:39 compute-0 systemd[1]: Started libpod-conmon-acaddd7c8bf595b7fa7cf3c57ffdf60ac69ac5bdceb756c4a2ffcc451ab3483c.scope.
Dec 09 16:41:39 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:41:39 compute-0 podman[260921]: 2025-12-09 16:41:39.740996749 +0000 UTC m=+0.025116813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:41:39 compute-0 podman[260921]: 2025-12-09 16:41:39.844312118 +0000 UTC m=+0.128432192 container init acaddd7c8bf595b7fa7cf3c57ffdf60ac69ac5bdceb756c4a2ffcc451ab3483c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_haslett, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:41:39 compute-0 podman[260921]: 2025-12-09 16:41:39.850770131 +0000 UTC m=+0.134890195 container start acaddd7c8bf595b7fa7cf3c57ffdf60ac69ac5bdceb756c4a2ffcc451ab3483c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 09 16:41:39 compute-0 strange_haslett[260937]: 167 167
Dec 09 16:41:39 compute-0 systemd[1]: libpod-acaddd7c8bf595b7fa7cf3c57ffdf60ac69ac5bdceb756c4a2ffcc451ab3483c.scope: Deactivated successfully.
Dec 09 16:41:39 compute-0 podman[260921]: 2025-12-09 16:41:39.855142415 +0000 UTC m=+0.139262499 container attach acaddd7c8bf595b7fa7cf3c57ffdf60ac69ac5bdceb756c4a2ffcc451ab3483c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_haslett, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:41:39 compute-0 conmon[260937]: conmon acaddd7c8bf595b7fa7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acaddd7c8bf595b7fa7cf3c57ffdf60ac69ac5bdceb756c4a2ffcc451ab3483c.scope/container/memory.events
Dec 09 16:41:39 compute-0 podman[260921]: 2025-12-09 16:41:39.856118723 +0000 UTC m=+0.140238757 container died acaddd7c8bf595b7fa7cf3c57ffdf60ac69ac5bdceb756c4a2ffcc451ab3483c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 09 16:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-80f56a704804a82f075c6e00b0025e1f0d4e3d829b139b41e7e70a51c4b32fdb-merged.mount: Deactivated successfully.
Dec 09 16:41:39 compute-0 podman[260921]: 2025-12-09 16:41:39.901268893 +0000 UTC m=+0.185388937 container remove acaddd7c8bf595b7fa7cf3c57ffdf60ac69ac5bdceb756c4a2ffcc451ab3483c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 16:41:39 compute-0 systemd[1]: libpod-conmon-acaddd7c8bf595b7fa7cf3c57ffdf60ac69ac5bdceb756c4a2ffcc451ab3483c.scope: Deactivated successfully.
Dec 09 16:41:40 compute-0 podman[260959]: 2025-12-09 16:41:40.097778755 +0000 UTC m=+0.047101106 container create e50cb36e4031d27a7c079252cc7e52892af8805e674600e05717eaa6c206dc3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_rubin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 16:41:40 compute-0 systemd[1]: Started libpod-conmon-e50cb36e4031d27a7c079252cc7e52892af8805e674600e05717eaa6c206dc3a.scope.
Dec 09 16:41:40 compute-0 ceph-mon[75222]: pgmap v1287: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Dec 09 16:41:40 compute-0 podman[260959]: 2025-12-09 16:41:40.077230173 +0000 UTC m=+0.026552544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:41:40 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb49176fa42475ef293bfbb97d6976ce645fb0fe3359f782bd5c40c678f4838/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb49176fa42475ef293bfbb97d6976ce645fb0fe3359f782bd5c40c678f4838/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb49176fa42475ef293bfbb97d6976ce645fb0fe3359f782bd5c40c678f4838/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb49176fa42475ef293bfbb97d6976ce645fb0fe3359f782bd5c40c678f4838/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:40 compute-0 podman[260959]: 2025-12-09 16:41:40.197913495 +0000 UTC m=+0.147235846 container init e50cb36e4031d27a7c079252cc7e52892af8805e674600e05717eaa6c206dc3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:41:40 compute-0 podman[260959]: 2025-12-09 16:41:40.208975088 +0000 UTC m=+0.158297419 container start e50cb36e4031d27a7c079252cc7e52892af8805e674600e05717eaa6c206dc3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 09 16:41:40 compute-0 podman[260959]: 2025-12-09 16:41:40.213026973 +0000 UTC m=+0.162349344 container attach e50cb36e4031d27a7c079252cc7e52892af8805e674600e05717eaa6c206dc3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_rubin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 09 16:41:40 compute-0 frosty_rubin[260976]: {
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:     "0": [
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:         {
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "devices": [
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "/dev/loop3"
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             ],
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_name": "ceph_lv0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_size": "21470642176",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "name": "ceph_lv0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "tags": {
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.cluster_name": "ceph",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.crush_device_class": "",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.encrypted": "0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.objectstore": "bluestore",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.osd_id": "0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.type": "block",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.vdo": "0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.with_tpm": "0"
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             },
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "type": "block",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "vg_name": "ceph_vg0"
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:         }
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:     ],
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:     "1": [
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:         {
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "devices": [
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "/dev/loop4"
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             ],
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_name": "ceph_lv1",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_size": "21470642176",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "name": "ceph_lv1",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "tags": {
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.cluster_name": "ceph",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.crush_device_class": "",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.encrypted": "0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.objectstore": "bluestore",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.osd_id": "1",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.type": "block",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.vdo": "0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.with_tpm": "0"
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             },
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "type": "block",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "vg_name": "ceph_vg1"
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:         }
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:     ],
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:     "2": [
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:         {
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "devices": [
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "/dev/loop5"
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             ],
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_name": "ceph_lv2",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_size": "21470642176",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "name": "ceph_lv2",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "tags": {
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.cluster_name": "ceph",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.crush_device_class": "",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.encrypted": "0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.objectstore": "bluestore",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.osd_id": "2",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.type": "block",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.vdo": "0",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:                 "ceph.with_tpm": "0"
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             },
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "type": "block",
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:             "vg_name": "ceph_vg2"
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:         }
Dec 09 16:41:40 compute-0 frosty_rubin[260976]:     ]
Dec 09 16:41:40 compute-0 frosty_rubin[260976]: }
Dec 09 16:41:40 compute-0 systemd[1]: libpod-e50cb36e4031d27a7c079252cc7e52892af8805e674600e05717eaa6c206dc3a.scope: Deactivated successfully.
Dec 09 16:41:40 compute-0 podman[260959]: 2025-12-09 16:41:40.542379982 +0000 UTC m=+0.491702303 container died e50cb36e4031d27a7c079252cc7e52892af8805e674600e05717eaa6c206dc3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:41:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdb49176fa42475ef293bfbb97d6976ce645fb0fe3359f782bd5c40c678f4838-merged.mount: Deactivated successfully.
Dec 09 16:41:40 compute-0 podman[260959]: 2025-12-09 16:41:40.583884579 +0000 UTC m=+0.533206910 container remove e50cb36e4031d27a7c079252cc7e52892af8805e674600e05717eaa6c206dc3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_rubin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 09 16:41:40 compute-0 systemd[1]: libpod-conmon-e50cb36e4031d27a7c079252cc7e52892af8805e674600e05717eaa6c206dc3a.scope: Deactivated successfully.
Dec 09 16:41:40 compute-0 sudo[260886]: pam_unix(sudo:session): session closed for user root
Dec 09 16:41:40 compute-0 sudo[260995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:41:40 compute-0 sudo[260995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:41:40 compute-0 sudo[260995]: pam_unix(sudo:session): session closed for user root
Dec 09 16:41:40 compute-0 sudo[261020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:41:40 compute-0 sudo[261020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:41:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Dec 09 16:41:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:41:41 compute-0 podman[261055]: 2025-12-09 16:41:41.081288632 +0000 UTC m=+0.052456558 container create 891bf130c9697f1a38003df77ebed3f3ebee9e87855e3fd4cdf714219dfdad9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 16:41:41 compute-0 systemd[1]: Started libpod-conmon-891bf130c9697f1a38003df77ebed3f3ebee9e87855e3fd4cdf714219dfdad9f.scope.
Dec 09 16:41:41 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:41:41 compute-0 podman[261055]: 2025-12-09 16:41:41.062081028 +0000 UTC m=+0.033248974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:41:41 compute-0 podman[261055]: 2025-12-09 16:41:41.17325916 +0000 UTC m=+0.144427116 container init 891bf130c9697f1a38003df77ebed3f3ebee9e87855e3fd4cdf714219dfdad9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:41:41 compute-0 podman[261055]: 2025-12-09 16:41:41.178949031 +0000 UTC m=+0.150116947 container start 891bf130c9697f1a38003df77ebed3f3ebee9e87855e3fd4cdf714219dfdad9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ganguly, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 09 16:41:41 compute-0 podman[261055]: 2025-12-09 16:41:41.183378227 +0000 UTC m=+0.154546233 container attach 891bf130c9697f1a38003df77ebed3f3ebee9e87855e3fd4cdf714219dfdad9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:41:41 compute-0 determined_ganguly[261072]: 167 167
Dec 09 16:41:41 compute-0 systemd[1]: libpod-891bf130c9697f1a38003df77ebed3f3ebee9e87855e3fd4cdf714219dfdad9f.scope: Deactivated successfully.
Dec 09 16:41:41 compute-0 podman[261055]: 2025-12-09 16:41:41.18593921 +0000 UTC m=+0.157107126 container died 891bf130c9697f1a38003df77ebed3f3ebee9e87855e3fd4cdf714219dfdad9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ganguly, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 09 16:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-db47bc6b8f17e3bd81586bf52aedea9feac1de81b3a7d213fa992ee6d1c55a47-merged.mount: Deactivated successfully.
Dec 09 16:41:41 compute-0 podman[261055]: 2025-12-09 16:41:41.224982207 +0000 UTC m=+0.196150123 container remove 891bf130c9697f1a38003df77ebed3f3ebee9e87855e3fd4cdf714219dfdad9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ganguly, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:41:41 compute-0 systemd[1]: libpod-conmon-891bf130c9697f1a38003df77ebed3f3ebee9e87855e3fd4cdf714219dfdad9f.scope: Deactivated successfully.
Dec 09 16:41:41 compute-0 podman[261095]: 2025-12-09 16:41:41.430863924 +0000 UTC m=+0.047695003 container create 73ba72972a889066e3b0b18cb85839b66d1af0658af15c721075d691e655d285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 09 16:41:41 compute-0 systemd[1]: Started libpod-conmon-73ba72972a889066e3b0b18cb85839b66d1af0658af15c721075d691e655d285.scope.
Dec 09 16:41:41 compute-0 podman[261095]: 2025-12-09 16:41:41.410009273 +0000 UTC m=+0.026840372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:41:41 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f21d1ba4ff77f3f47505f293c515d53ac76d96fa2abb146bb7212f0b7f43d47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f21d1ba4ff77f3f47505f293c515d53ac76d96fa2abb146bb7212f0b7f43d47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f21d1ba4ff77f3f47505f293c515d53ac76d96fa2abb146bb7212f0b7f43d47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f21d1ba4ff77f3f47505f293c515d53ac76d96fa2abb146bb7212f0b7f43d47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:41:41 compute-0 podman[261095]: 2025-12-09 16:41:41.53368016 +0000 UTC m=+0.150511319 container init 73ba72972a889066e3b0b18cb85839b66d1af0658af15c721075d691e655d285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_ptolemy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:41:41 compute-0 podman[261095]: 2025-12-09 16:41:41.547912633 +0000 UTC m=+0.164743742 container start 73ba72972a889066e3b0b18cb85839b66d1af0658af15c721075d691e655d285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_ptolemy, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:41:41 compute-0 podman[261095]: 2025-12-09 16:41:41.552785231 +0000 UTC m=+0.169616330 container attach 73ba72972a889066e3b0b18cb85839b66d1af0658af15c721075d691e655d285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_ptolemy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:41:42 compute-0 ceph-mon[75222]: pgmap v1288: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Dec 09 16:41:42 compute-0 lvm[261189]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:41:42 compute-0 lvm[261189]: VG ceph_vg0 finished
Dec 09 16:41:42 compute-0 lvm[261192]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:41:42 compute-0 lvm[261192]: VG ceph_vg2 finished
Dec 09 16:41:42 compute-0 lvm[261191]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:41:42 compute-0 lvm[261191]: VG ceph_vg1 finished
Dec 09 16:41:42 compute-0 nostalgic_ptolemy[261111]: {}
Dec 09 16:41:42 compute-0 systemd[1]: libpod-73ba72972a889066e3b0b18cb85839b66d1af0658af15c721075d691e655d285.scope: Deactivated successfully.
Dec 09 16:41:42 compute-0 podman[261095]: 2025-12-09 16:41:42.445545425 +0000 UTC m=+1.062376504 container died 73ba72972a889066e3b0b18cb85839b66d1af0658af15c721075d691e655d285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_ptolemy, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 16:41:42 compute-0 systemd[1]: libpod-73ba72972a889066e3b0b18cb85839b66d1af0658af15c721075d691e655d285.scope: Consumed 1.450s CPU time.
Dec 09 16:41:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f21d1ba4ff77f3f47505f293c515d53ac76d96fa2abb146bb7212f0b7f43d47-merged.mount: Deactivated successfully.
Dec 09 16:41:42 compute-0 podman[261095]: 2025-12-09 16:41:42.50072746 +0000 UTC m=+1.117558539 container remove 73ba72972a889066e3b0b18cb85839b66d1af0658af15c721075d691e655d285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_ptolemy, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 16:41:42 compute-0 systemd[1]: libpod-conmon-73ba72972a889066e3b0b18cb85839b66d1af0658af15c721075d691e655d285.scope: Deactivated successfully.
Dec 09 16:41:42 compute-0 sudo[261020]: pam_unix(sudo:session): session closed for user root
Dec 09 16:41:42 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:41:42 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:41:42 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:41:42 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:41:42 compute-0 sudo[261208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:41:42 compute-0 sudo[261208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:41:42 compute-0 sudo[261208]: pam_unix(sudo:session): session closed for user root
Dec 09 16:41:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.9 MiB/s wr, 11 op/s
Dec 09 16:41:43 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:41:43 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:41:43 compute-0 ceph-mon[75222]: pgmap v1289: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.9 MiB/s wr, 11 op/s
Dec 09 16:41:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Dec 09 16:41:45 compute-0 ceph-mon[75222]: pgmap v1290: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Dec 09 16:41:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:41:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 379 KiB/s wr, 5 op/s
Dec 09 16:41:47 compute-0 ceph-mon[75222]: pgmap v1291: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 379 KiB/s wr, 5 op/s
Dec 09 16:41:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:49 compute-0 ceph-mon[75222]: pgmap v1292: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:41:51 compute-0 ceph-mon[75222]: pgmap v1293: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:52 compute-0 podman[261234]: 2025-12-09 16:41:52.621585297 +0000 UTC m=+0.058508700 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 09 16:41:52 compute-0 podman[261233]: 2025-12-09 16:41:52.647517172 +0000 UTC m=+0.088205652 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:41:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:53 compute-0 ceph-mon[75222]: pgmap v1294: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:41:56 compute-0 ceph-mon[75222]: pgmap v1295: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:41:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:41:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:41:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:41:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:41:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:41:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:58 compute-0 ceph-mon[75222]: pgmap v1296: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:41:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:00 compute-0 ceph-mon[75222]: pgmap v1297: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:42:02 compute-0 ceph-mon[75222]: pgmap v1298: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:04 compute-0 ceph-mon[75222]: pgmap v1299: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:42:06 compute-0 ceph-mon[75222]: pgmap v1300: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:07 compute-0 podman[261278]: 2025-12-09 16:42:07.656955821 +0000 UTC m=+0.092334239 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 16:42:08 compute-0 nova_compute[243452]: 2025-12-09 16:42:08.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:08 compute-0 ceph-mon[75222]: pgmap v1301: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:08 compute-0 nova_compute[243452]: 2025-12-09 16:42:08.689 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:42:08 compute-0 nova_compute[243452]: 2025-12-09 16:42:08.689 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:42:08 compute-0 nova_compute[243452]: 2025-12-09 16:42:08.690 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:42:08 compute-0 nova_compute[243452]: 2025-12-09 16:42:08.690 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:42:08 compute-0 nova_compute[243452]: 2025-12-09 16:42:08.691 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:42:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:42:09 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3637834185' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:42:09 compute-0 nova_compute[243452]: 2025-12-09 16:42:09.289 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:42:09 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3637834185' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:42:09 compute-0 nova_compute[243452]: 2025-12-09 16:42:09.448 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:42:09 compute-0 nova_compute[243452]: 2025-12-09 16:42:09.450 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5130MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:42:09 compute-0 nova_compute[243452]: 2025-12-09 16:42:09.450 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:42:09 compute-0 nova_compute[243452]: 2025-12-09 16:42:09.450 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:42:09 compute-0 nova_compute[243452]: 2025-12-09 16:42:09.532 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:42:09 compute-0 nova_compute[243452]: 2025-12-09 16:42:09.533 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:42:09 compute-0 nova_compute[243452]: 2025-12-09 16:42:09.551 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:42:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:42:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3355542165' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:42:10 compute-0 nova_compute[243452]: 2025-12-09 16:42:10.070 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:42:10 compute-0 nova_compute[243452]: 2025-12-09 16:42:10.075 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:42:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:42:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3729647474' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:42:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:42:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3729647474' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:42:10 compute-0 nova_compute[243452]: 2025-12-09 16:42:10.192 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:42:10 compute-0 nova_compute[243452]: 2025-12-09 16:42:10.193 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:42:10 compute-0 nova_compute[243452]: 2025-12-09 16:42:10.194 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:42:10 compute-0 ceph-mon[75222]: pgmap v1302: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3355542165' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:42:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3729647474' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:42:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3729647474' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:42:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:42:11 compute-0 nova_compute[243452]: 2025-12-09 16:42:11.194 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:11 compute-0 nova_compute[243452]: 2025-12-09 16:42:11.194 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:11 compute-0 nova_compute[243452]: 2025-12-09 16:42:11.194 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:42:11 compute-0 nova_compute[243452]: 2025-12-09 16:42:11.194 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:42:12 compute-0 nova_compute[243452]: 2025-12-09 16:42:12.158 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:42:12 compute-0 nova_compute[243452]: 2025-12-09 16:42:12.159 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:12 compute-0 nova_compute[243452]: 2025-12-09 16:42:12.159 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:12 compute-0 nova_compute[243452]: 2025-12-09 16:42:12.159 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:42:12 compute-0 ceph-mon[75222]: pgmap v1303: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:13 compute-0 nova_compute[243452]: 2025-12-09 16:42:13.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:13 compute-0 nova_compute[243452]: 2025-12-09 16:42:13.056 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:13 compute-0 nova_compute[243452]: 2025-12-09 16:42:13.056 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:13 compute-0 sshd-session[261343]: Invalid user oracle from 146.190.31.45 port 55534
Dec 09 16:42:13 compute-0 sshd-session[261343]: Connection closed by invalid user oracle 146.190.31.45 port 55534 [preauth]
Dec 09 16:42:14 compute-0 ceph-mon[75222]: pgmap v1304: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:15 compute-0 nova_compute[243452]: 2025-12-09 16:42:15.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:15 compute-0 nova_compute[243452]: 2025-12-09 16:42:15.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:42:16 compute-0 ceph-mon[75222]: pgmap v1305: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:17 compute-0 nova_compute[243452]: 2025-12-09 16:42:17.069 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:42:17.861 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:42:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:42:17.862 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:42:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:42:17.862 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:42:18 compute-0 nova_compute[243452]: 2025-12-09 16:42:18.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:18 compute-0 nova_compute[243452]: 2025-12-09 16:42:18.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 09 16:42:18 compute-0 nova_compute[243452]: 2025-12-09 16:42:18.094 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 09 16:42:18 compute-0 ceph-mon[75222]: pgmap v1306: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:20 compute-0 ceph-mon[75222]: pgmap v1307: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:42:22 compute-0 ceph-mon[75222]: pgmap v1308: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:23 compute-0 podman[261345]: 2025-12-09 16:42:23.66655379 +0000 UTC m=+0.107385487 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 09 16:42:23 compute-0 podman[261362]: 2025-12-09 16:42:23.674885736 +0000 UTC m=+0.059942781 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec 09 16:42:24 compute-0 ceph-mon[75222]: pgmap v1309: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:25 compute-0 nova_compute[243452]: 2025-12-09 16:42:25.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:25 compute-0 nova_compute[243452]: 2025-12-09 16:42:25.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 09 16:42:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:42:25
Dec 09 16:42:25 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:42:25 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:42:25 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'vms', '.rgw.root', 'images', 'default.rgw.meta']
Dec 09 16:42:25 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:42:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:42:26 compute-0 ceph-mon[75222]: pgmap v1310: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:42:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:28 compute-0 ceph-mon[75222]: pgmap v1311: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:30 compute-0 ceph-mon[75222]: pgmap v1312: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:42:31 compute-0 ceph-mon[75222]: pgmap v1313: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:33 compute-0 ceph-mon[75222]: pgmap v1314: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:35 compute-0 ceph-mon[75222]: pgmap v1315: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003337714792718457 of space, bias 1.0, pg target 0.10013144378155371 quantized to 32 (current 32)
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7458257509363095e-06 of space, bias 4.0, pg target 0.0020949909011235713 quantized to 16 (current 16)
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:42:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:42:37 compute-0 ceph-mon[75222]: pgmap v1316: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:38 compute-0 podman[261390]: 2025-12-09 16:42:38.636905162 +0000 UTC m=+0.074108892 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 16:42:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:39 compute-0 ceph-mon[75222]: pgmap v1317: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:42:41 compute-0 ceph-mon[75222]: pgmap v1318: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:42 compute-0 sudo[261412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:42:42 compute-0 sudo[261412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:42:42 compute-0 sudo[261412]: pam_unix(sudo:session): session closed for user root
Dec 09 16:42:42 compute-0 sudo[261437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:42:42 compute-0 sudo[261437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:42:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:43 compute-0 ceph-mon[75222]: pgmap v1319: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:43 compute-0 sudo[261437]: pam_unix(sudo:session): session closed for user root
Dec 09 16:42:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:42:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:42:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:42:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:42:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:42:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:42:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:42:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:42:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:42:44 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:42:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:42:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:42:44 compute-0 sudo[261493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:42:44 compute-0 sudo[261493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:42:44 compute-0 sudo[261493]: pam_unix(sudo:session): session closed for user root
Dec 09 16:42:44 compute-0 sudo[261518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:42:44 compute-0 sudo[261518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:42:44 compute-0 podman[261556]: 2025-12-09 16:42:44.456770691 +0000 UTC m=+0.045749698 container create 125f8f748cef125e572c8f53bff3526be421c87ab85ae173414a53de6fc2a100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_shaw, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:42:44 compute-0 systemd[1]: Started libpod-conmon-125f8f748cef125e572c8f53bff3526be421c87ab85ae173414a53de6fc2a100.scope.
Dec 09 16:42:44 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:42:44 compute-0 podman[261556]: 2025-12-09 16:42:44.437297299 +0000 UTC m=+0.026276316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:42:44 compute-0 podman[261556]: 2025-12-09 16:42:44.534625218 +0000 UTC m=+0.123604255 container init 125f8f748cef125e572c8f53bff3526be421c87ab85ae173414a53de6fc2a100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_shaw, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 16:42:44 compute-0 podman[261556]: 2025-12-09 16:42:44.543877501 +0000 UTC m=+0.132856518 container start 125f8f748cef125e572c8f53bff3526be421c87ab85ae173414a53de6fc2a100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 16:42:44 compute-0 podman[261556]: 2025-12-09 16:42:44.5484514 +0000 UTC m=+0.137430397 container attach 125f8f748cef125e572c8f53bff3526be421c87ab85ae173414a53de6fc2a100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_shaw, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:42:44 compute-0 dazzling_shaw[261572]: 167 167
Dec 09 16:42:44 compute-0 systemd[1]: libpod-125f8f748cef125e572c8f53bff3526be421c87ab85ae173414a53de6fc2a100.scope: Deactivated successfully.
Dec 09 16:42:44 compute-0 conmon[261572]: conmon 125f8f748cef125e572c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-125f8f748cef125e572c8f53bff3526be421c87ab85ae173414a53de6fc2a100.scope/container/memory.events
Dec 09 16:42:44 compute-0 podman[261556]: 2025-12-09 16:42:44.551296671 +0000 UTC m=+0.140275658 container died 125f8f748cef125e572c8f53bff3526be421c87ab85ae173414a53de6fc2a100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4f2d66673bf70360427b33863b55dab39726e8fb5fae8dc42e724271d703d69-merged.mount: Deactivated successfully.
Dec 09 16:42:44 compute-0 podman[261556]: 2025-12-09 16:42:44.596952036 +0000 UTC m=+0.185931023 container remove 125f8f748cef125e572c8f53bff3526be421c87ab85ae173414a53de6fc2a100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 16:42:44 compute-0 systemd[1]: libpod-conmon-125f8f748cef125e572c8f53bff3526be421c87ab85ae173414a53de6fc2a100.scope: Deactivated successfully.
Dec 09 16:42:44 compute-0 podman[261595]: 2025-12-09 16:42:44.763419716 +0000 UTC m=+0.042708972 container create 691ba4b685d8e29951d77f884c355fda6152781b89c17fa383a0a04c3518684f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_volhard, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:42:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:42:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:42:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:42:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:42:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:42:44 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:42:44 compute-0 systemd[1]: Started libpod-conmon-691ba4b685d8e29951d77f884c355fda6152781b89c17fa383a0a04c3518684f.scope.
Dec 09 16:42:44 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34289d7ae4bfe6d7d1f98f5fd76b2d7607d13e4d603e1cdc726ee5e897f9d01c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34289d7ae4bfe6d7d1f98f5fd76b2d7607d13e4d603e1cdc726ee5e897f9d01c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34289d7ae4bfe6d7d1f98f5fd76b2d7607d13e4d603e1cdc726ee5e897f9d01c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34289d7ae4bfe6d7d1f98f5fd76b2d7607d13e4d603e1cdc726ee5e897f9d01c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34289d7ae4bfe6d7d1f98f5fd76b2d7607d13e4d603e1cdc726ee5e897f9d01c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:44 compute-0 podman[261595]: 2025-12-09 16:42:44.744794098 +0000 UTC m=+0.024083384 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:42:44 compute-0 podman[261595]: 2025-12-09 16:42:44.842899629 +0000 UTC m=+0.122189005 container init 691ba4b685d8e29951d77f884c355fda6152781b89c17fa383a0a04c3518684f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_volhard, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:42:44 compute-0 podman[261595]: 2025-12-09 16:42:44.850398952 +0000 UTC m=+0.129688218 container start 691ba4b685d8e29951d77f884c355fda6152781b89c17fa383a0a04c3518684f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 09 16:42:44 compute-0 podman[261595]: 2025-12-09 16:42:44.853908852 +0000 UTC m=+0.133198138 container attach 691ba4b685d8e29951d77f884c355fda6152781b89c17fa383a0a04c3518684f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 09 16:42:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:45 compute-0 eloquent_volhard[261611]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:42:45 compute-0 eloquent_volhard[261611]: --> All data devices are unavailable
Dec 09 16:42:45 compute-0 systemd[1]: libpod-691ba4b685d8e29951d77f884c355fda6152781b89c17fa383a0a04c3518684f.scope: Deactivated successfully.
Dec 09 16:42:45 compute-0 podman[261631]: 2025-12-09 16:42:45.362273236 +0000 UTC m=+0.030582778 container died 691ba4b685d8e29951d77f884c355fda6152781b89c17fa383a0a04c3518684f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 09 16:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-34289d7ae4bfe6d7d1f98f5fd76b2d7607d13e4d603e1cdc726ee5e897f9d01c-merged.mount: Deactivated successfully.
Dec 09 16:42:45 compute-0 podman[261631]: 2025-12-09 16:42:45.403136155 +0000 UTC m=+0.071445607 container remove 691ba4b685d8e29951d77f884c355fda6152781b89c17fa383a0a04c3518684f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 16:42:45 compute-0 systemd[1]: libpod-conmon-691ba4b685d8e29951d77f884c355fda6152781b89c17fa383a0a04c3518684f.scope: Deactivated successfully.
Dec 09 16:42:45 compute-0 sudo[261518]: pam_unix(sudo:session): session closed for user root
Dec 09 16:42:45 compute-0 sudo[261647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:42:45 compute-0 sudo[261647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:42:45 compute-0 sudo[261647]: pam_unix(sudo:session): session closed for user root
Dec 09 16:42:45 compute-0 sudo[261672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:42:45 compute-0 sudo[261672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:42:45 compute-0 ceph-mon[75222]: pgmap v1320: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:45 compute-0 podman[261708]: 2025-12-09 16:42:45.866159643 +0000 UTC m=+0.045757948 container create d882ccae542697864b1beb1234b27faec197890dea1dfbebd36686aa4a656aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_roentgen, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:42:45 compute-0 systemd[1]: Started libpod-conmon-d882ccae542697864b1beb1234b27faec197890dea1dfbebd36686aa4a656aaa.scope.
Dec 09 16:42:45 compute-0 podman[261708]: 2025-12-09 16:42:45.845495517 +0000 UTC m=+0.025093822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:42:45 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:42:45 compute-0 podman[261708]: 2025-12-09 16:42:45.965890421 +0000 UTC m=+0.145488766 container init d882ccae542697864b1beb1234b27faec197890dea1dfbebd36686aa4a656aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 09 16:42:45 compute-0 podman[261708]: 2025-12-09 16:42:45.977552292 +0000 UTC m=+0.157150597 container start d882ccae542697864b1beb1234b27faec197890dea1dfbebd36686aa4a656aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_roentgen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:42:45 compute-0 podman[261708]: 2025-12-09 16:42:45.98101159 +0000 UTC m=+0.160609975 container attach d882ccae542697864b1beb1234b27faec197890dea1dfbebd36686aa4a656aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:42:45 compute-0 pedantic_roentgen[261724]: 167 167
Dec 09 16:42:45 compute-0 systemd[1]: libpod-d882ccae542697864b1beb1234b27faec197890dea1dfbebd36686aa4a656aaa.scope: Deactivated successfully.
Dec 09 16:42:45 compute-0 podman[261708]: 2025-12-09 16:42:45.984645703 +0000 UTC m=+0.164244028 container died d882ccae542697864b1beb1234b27faec197890dea1dfbebd36686aa4a656aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_roentgen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f5f9ba63e5ad5397ac5c1f04f17c7d3e4b80a32cc95176d0e2a25c005f75acc-merged.mount: Deactivated successfully.
Dec 09 16:42:46 compute-0 podman[261708]: 2025-12-09 16:42:46.028460055 +0000 UTC m=+0.208058370 container remove d882ccae542697864b1beb1234b27faec197890dea1dfbebd36686aa4a656aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 09 16:42:46 compute-0 systemd[1]: libpod-conmon-d882ccae542697864b1beb1234b27faec197890dea1dfbebd36686aa4a656aaa.scope: Deactivated successfully.
Dec 09 16:42:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:42:46 compute-0 podman[261746]: 2025-12-09 16:42:46.262407979 +0000 UTC m=+0.074427042 container create 589d6476282dae353c5f7b623ddb99e57781f40876855d79b53194e8f94e964a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Dec 09 16:42:46 compute-0 systemd[1]: Started libpod-conmon-589d6476282dae353c5f7b623ddb99e57781f40876855d79b53194e8f94e964a.scope.
Dec 09 16:42:46 compute-0 podman[261746]: 2025-12-09 16:42:46.232768748 +0000 UTC m=+0.044787871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:42:46 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a548f8ba5bc222c15c76efb6a44e7f99ab1b722de7181be880eb0b9a935601/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a548f8ba5bc222c15c76efb6a44e7f99ab1b722de7181be880eb0b9a935601/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a548f8ba5bc222c15c76efb6a44e7f99ab1b722de7181be880eb0b9a935601/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a548f8ba5bc222c15c76efb6a44e7f99ab1b722de7181be880eb0b9a935601/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:46 compute-0 podman[261746]: 2025-12-09 16:42:46.349486268 +0000 UTC m=+0.161505361 container init 589d6476282dae353c5f7b623ddb99e57781f40876855d79b53194e8f94e964a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:42:46 compute-0 podman[261746]: 2025-12-09 16:42:46.357966818 +0000 UTC m=+0.169985841 container start 589d6476282dae353c5f7b623ddb99e57781f40876855d79b53194e8f94e964a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shannon, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 16:42:46 compute-0 podman[261746]: 2025-12-09 16:42:46.36118946 +0000 UTC m=+0.173208523 container attach 589d6476282dae353c5f7b623ddb99e57781f40876855d79b53194e8f94e964a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shannon, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:42:46 compute-0 exciting_shannon[261762]: {
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:     "0": [
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:         {
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "devices": [
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "/dev/loop3"
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             ],
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_name": "ceph_lv0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_size": "21470642176",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "name": "ceph_lv0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "tags": {
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.cluster_name": "ceph",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.crush_device_class": "",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.encrypted": "0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.objectstore": "bluestore",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.osd_id": "0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.type": "block",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.vdo": "0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.with_tpm": "0"
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             },
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "type": "block",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "vg_name": "ceph_vg0"
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:         }
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:     ],
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:     "1": [
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:         {
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "devices": [
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "/dev/loop4"
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             ],
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_name": "ceph_lv1",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_size": "21470642176",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "name": "ceph_lv1",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "tags": {
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.cluster_name": "ceph",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.crush_device_class": "",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.encrypted": "0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.objectstore": "bluestore",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.osd_id": "1",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.type": "block",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.vdo": "0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.with_tpm": "0"
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             },
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "type": "block",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "vg_name": "ceph_vg1"
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:         }
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:     ],
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:     "2": [
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:         {
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "devices": [
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "/dev/loop5"
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             ],
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_name": "ceph_lv2",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_size": "21470642176",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "name": "ceph_lv2",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "tags": {
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.cluster_name": "ceph",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.crush_device_class": "",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.encrypted": "0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.objectstore": "bluestore",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.osd_id": "2",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.type": "block",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.vdo": "0",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:                 "ceph.with_tpm": "0"
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             },
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "type": "block",
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:             "vg_name": "ceph_vg2"
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:         }
Dec 09 16:42:46 compute-0 exciting_shannon[261762]:     ]
Dec 09 16:42:46 compute-0 exciting_shannon[261762]: }
Dec 09 16:42:46 compute-0 systemd[1]: libpod-589d6476282dae353c5f7b623ddb99e57781f40876855d79b53194e8f94e964a.scope: Deactivated successfully.
Dec 09 16:42:46 compute-0 podman[261746]: 2025-12-09 16:42:46.697391062 +0000 UTC m=+0.509410105 container died 589d6476282dae353c5f7b623ddb99e57781f40876855d79b53194e8f94e964a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 09 16:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7a548f8ba5bc222c15c76efb6a44e7f99ab1b722de7181be880eb0b9a935601-merged.mount: Deactivated successfully.
Dec 09 16:42:46 compute-0 podman[261746]: 2025-12-09 16:42:46.747438051 +0000 UTC m=+0.559457094 container remove 589d6476282dae353c5f7b623ddb99e57781f40876855d79b53194e8f94e964a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 09 16:42:46 compute-0 systemd[1]: libpod-conmon-589d6476282dae353c5f7b623ddb99e57781f40876855d79b53194e8f94e964a.scope: Deactivated successfully.
Dec 09 16:42:46 compute-0 sudo[261672]: pam_unix(sudo:session): session closed for user root
Dec 09 16:42:46 compute-0 sudo[261782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:42:46 compute-0 sudo[261782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:42:46 compute-0 sudo[261782]: pam_unix(sudo:session): session closed for user root
Dec 09 16:42:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:46 compute-0 sudo[261807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:42:46 compute-0 sudo[261807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:42:47 compute-0 podman[261845]: 2025-12-09 16:42:47.2940488 +0000 UTC m=+0.064666734 container create ea7e97d6712a741da4febde55767b381f3bc683443db2f92d52a73e908135c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:42:47 compute-0 systemd[1]: Started libpod-conmon-ea7e97d6712a741da4febde55767b381f3bc683443db2f92d52a73e908135c10.scope.
Dec 09 16:42:47 compute-0 podman[261845]: 2025-12-09 16:42:47.268849456 +0000 UTC m=+0.039467490 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:42:47 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:42:47 compute-0 podman[261845]: 2025-12-09 16:42:47.388457857 +0000 UTC m=+0.159075841 container init ea7e97d6712a741da4febde55767b381f3bc683443db2f92d52a73e908135c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 16:42:47 compute-0 podman[261845]: 2025-12-09 16:42:47.395699492 +0000 UTC m=+0.166317426 container start ea7e97d6712a741da4febde55767b381f3bc683443db2f92d52a73e908135c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:42:47 compute-0 podman[261845]: 2025-12-09 16:42:47.399220202 +0000 UTC m=+0.169838186 container attach ea7e97d6712a741da4febde55767b381f3bc683443db2f92d52a73e908135c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:42:47 compute-0 stoic_roentgen[261862]: 167 167
Dec 09 16:42:47 compute-0 systemd[1]: libpod-ea7e97d6712a741da4febde55767b381f3bc683443db2f92d52a73e908135c10.scope: Deactivated successfully.
Dec 09 16:42:47 compute-0 podman[261845]: 2025-12-09 16:42:47.401285451 +0000 UTC m=+0.171903395 container died ea7e97d6712a741da4febde55767b381f3bc683443db2f92d52a73e908135c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-77a0106a752a6b1d1c378cfb1f88e5a9b6c5964456377e5a2c43cd0f0be074dc-merged.mount: Deactivated successfully.
Dec 09 16:42:47 compute-0 podman[261845]: 2025-12-09 16:42:47.438771194 +0000 UTC m=+0.209389128 container remove ea7e97d6712a741da4febde55767b381f3bc683443db2f92d52a73e908135c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:42:47 compute-0 systemd[1]: libpod-conmon-ea7e97d6712a741da4febde55767b381f3bc683443db2f92d52a73e908135c10.scope: Deactivated successfully.
Dec 09 16:42:47 compute-0 podman[261885]: 2025-12-09 16:42:47.623752668 +0000 UTC m=+0.058951013 container create f9f98fcf914bcf212017dd3afbd3006411556975acd48e9a86e60e3f25eaba4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 09 16:42:47 compute-0 systemd[1]: Started libpod-conmon-f9f98fcf914bcf212017dd3afbd3006411556975acd48e9a86e60e3f25eaba4a.scope.
Dec 09 16:42:47 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29ebf7feed402f8468d78d2f6e11c0ec436c90c04ae58b848bc9789d87c43a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29ebf7feed402f8468d78d2f6e11c0ec436c90c04ae58b848bc9789d87c43a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:47 compute-0 podman[261885]: 2025-12-09 16:42:47.599082598 +0000 UTC m=+0.034280973 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29ebf7feed402f8468d78d2f6e11c0ec436c90c04ae58b848bc9789d87c43a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29ebf7feed402f8468d78d2f6e11c0ec436c90c04ae58b848bc9789d87c43a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:42:47 compute-0 podman[261885]: 2025-12-09 16:42:47.706241387 +0000 UTC m=+0.141439742 container init f9f98fcf914bcf212017dd3afbd3006411556975acd48e9a86e60e3f25eaba4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:42:47 compute-0 podman[261885]: 2025-12-09 16:42:47.713503163 +0000 UTC m=+0.148701498 container start f9f98fcf914bcf212017dd3afbd3006411556975acd48e9a86e60e3f25eaba4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 09 16:42:47 compute-0 podman[261885]: 2025-12-09 16:42:47.717004292 +0000 UTC m=+0.152202647 container attach f9f98fcf914bcf212017dd3afbd3006411556975acd48e9a86e60e3f25eaba4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:42:47 compute-0 ceph-mon[75222]: pgmap v1321: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:48 compute-0 lvm[261978]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:42:48 compute-0 lvm[261982]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:42:48 compute-0 lvm[261982]: VG ceph_vg2 finished
Dec 09 16:42:48 compute-0 lvm[261978]: VG ceph_vg0 finished
Dec 09 16:42:48 compute-0 lvm[261981]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:42:48 compute-0 lvm[261981]: VG ceph_vg1 finished
Dec 09 16:42:48 compute-0 clever_hofstadter[261901]: {}
Dec 09 16:42:48 compute-0 systemd[1]: libpod-f9f98fcf914bcf212017dd3afbd3006411556975acd48e9a86e60e3f25eaba4a.scope: Deactivated successfully.
Dec 09 16:42:48 compute-0 systemd[1]: libpod-f9f98fcf914bcf212017dd3afbd3006411556975acd48e9a86e60e3f25eaba4a.scope: Consumed 1.351s CPU time.
Dec 09 16:42:48 compute-0 podman[261885]: 2025-12-09 16:42:48.564135252 +0000 UTC m=+0.999333607 container died f9f98fcf914bcf212017dd3afbd3006411556975acd48e9a86e60e3f25eaba4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 16:42:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c29ebf7feed402f8468d78d2f6e11c0ec436c90c04ae58b848bc9789d87c43a2-merged.mount: Deactivated successfully.
Dec 09 16:42:48 compute-0 podman[261885]: 2025-12-09 16:42:48.610589419 +0000 UTC m=+1.045787754 container remove f9f98fcf914bcf212017dd3afbd3006411556975acd48e9a86e60e3f25eaba4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:42:48 compute-0 systemd[1]: libpod-conmon-f9f98fcf914bcf212017dd3afbd3006411556975acd48e9a86e60e3f25eaba4a.scope: Deactivated successfully.
Dec 09 16:42:48 compute-0 sudo[261807]: pam_unix(sudo:session): session closed for user root
Dec 09 16:42:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:42:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:42:48 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:42:48 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:42:48 compute-0 sudo[261997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:42:48 compute-0 sudo[261997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:42:48 compute-0 sudo[261997]: pam_unix(sudo:session): session closed for user root
Dec 09 16:42:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:42:49 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:42:49 compute-0 ceph-mon[75222]: pgmap v1322: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:42:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Dec 09 16:42:51 compute-0 ceph-mon[75222]: pgmap v1323: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Dec 09 16:42:51 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Dec 09 16:42:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:52 compute-0 ceph-mon[75222]: osdmap e151: 3 total, 3 up, 3 in
Dec 09 16:42:52 compute-0 nova_compute[243452]: 2025-12-09 16:42:52.978 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:42:53 compute-0 ceph-mon[75222]: pgmap v1325: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:42:54 compute-0 podman[262023]: 2025-12-09 16:42:54.640106971 +0000 UTC m=+0.076929273 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:42:54 compute-0 podman[262022]: 2025-12-09 16:42:54.687935956 +0000 UTC m=+0.123511572 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:42:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 4.9 MiB data, 142 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 511 B/s wr, 6 op/s
Dec 09 16:42:55 compute-0 sshd-session[262067]: Invalid user oracle from 146.190.31.45 port 35718
Dec 09 16:42:55 compute-0 sshd-session[262067]: Connection closed by invalid user oracle 146.190.31.45 port 35718 [preauth]
Dec 09 16:42:55 compute-0 ceph-mon[75222]: pgmap v1326: 305 pgs: 305 active+clean; 4.9 MiB data, 142 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 511 B/s wr, 6 op/s
Dec 09 16:42:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:42:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:42:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:42:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:42:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:42:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:42:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:42:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 09 16:42:57 compute-0 ceph-mon[75222]: pgmap v1327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 09 16:42:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 09 16:42:59 compute-0 ceph-mon[75222]: pgmap v1328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 09 16:43:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 09 16:43:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:43:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Dec 09 16:43:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Dec 09 16:43:01 compute-0 ceph-mon[75222]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Dec 09 16:43:02 compute-0 ceph-mon[75222]: pgmap v1329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 09 16:43:02 compute-0 ceph-mon[75222]: osdmap e152: 3 total, 3 up, 3 in
Dec 09 16:43:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 09 16:43:04 compute-0 ceph-mon[75222]: pgmap v1331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 09 16:43:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:43:04 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6131 writes, 27K keys, 6131 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6131 writes, 6131 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1352 writes, 5846 keys, 1352 commit groups, 1.0 writes per commit group, ingest: 8.88 MB, 0.01 MB/s
                                           Interval WAL: 1352 writes, 1352 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    128.0      0.24              0.10        15    0.016       0      0       0.0       0.0
                                             L6      1/0    7.15 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    165.8    135.1      0.77              0.33        14    0.055     64K   7792       0.0       0.0
                                            Sum      1/0    7.15 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4    126.3    133.4      1.01              0.42        29    0.035     64K   7792       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    149.2    148.4      0.20              0.09         6    0.033     16K   2021       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    165.8    135.1      0.77              0.33        14    0.055     64K   7792       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    130.0      0.24              0.10        14    0.017       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.030, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.06 MB/s write, 0.12 GB read, 0.05 MB/s read, 1.0 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ad05ef58d0#2 capacity: 304.00 MB usage: 14.00 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000128 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(864,13.49 MB,4.43655%) FilterBlock(30,184.86 KB,0.0593838%) IndexBlock(30,345.11 KB,0.110862%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 09 16:43:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 920 B/s wr, 18 op/s
Dec 09 16:43:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:43:06 compute-0 ceph-mon[75222]: pgmap v1332: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 920 B/s wr, 18 op/s
Dec 09 16:43:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:08 compute-0 nova_compute[243452]: 2025-12-09 16:43:08.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:43:08 compute-0 ceph-mon[75222]: pgmap v1333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:08 compute-0 nova_compute[243452]: 2025-12-09 16:43:08.159 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:43:08 compute-0 nova_compute[243452]: 2025-12-09 16:43:08.160 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:43:08 compute-0 nova_compute[243452]: 2025-12-09 16:43:08.160 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:43:08 compute-0 nova_compute[243452]: 2025-12-09 16:43:08.160 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:43:08 compute-0 nova_compute[243452]: 2025-12-09 16:43:08.161 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:43:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:43:08 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/110128401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:43:08 compute-0 nova_compute[243452]: 2025-12-09 16:43:08.700 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:43:08 compute-0 nova_compute[243452]: 2025-12-09 16:43:08.876 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:43:08 compute-0 nova_compute[243452]: 2025-12-09 16:43:08.877 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:43:08 compute-0 nova_compute[243452]: 2025-12-09 16:43:08.877 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:43:08 compute-0 nova_compute[243452]: 2025-12-09 16:43:08.877 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:43:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.126 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.128 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:43:09 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/110128401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.193 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Refreshing inventories for resource provider ca130087-db63-46e1-b278-a80bb66e6865 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.274 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Updating ProviderTree inventory for provider ca130087-db63-46e1-b278-a80bb66e6865 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.274 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Updating inventory in ProviderTree for provider ca130087-db63-46e1-b278-a80bb66e6865 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.299 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Refreshing aggregate associations for resource provider ca130087-db63-46e1-b278-a80bb66e6865, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.328 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Refreshing trait associations for resource provider ca130087-db63-46e1-b278-a80bb66e6865, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.347 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:43:09 compute-0 podman[262111]: 2025-12-09 16:43:09.629053458 +0000 UTC m=+0.074812262 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:43:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:43:09 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4033273480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.925 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.931 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.949 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.950 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:43:09 compute-0 nova_compute[243452]: 2025-12-09 16:43:09.951 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:43:10 compute-0 ceph-mon[75222]: pgmap v1334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4033273480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:43:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:43:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2144602056' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:43:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:43:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2144602056' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:43:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:10 compute-0 nova_compute[243452]: 2025-12-09 16:43:10.951 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:43:10 compute-0 nova_compute[243452]: 2025-12-09 16:43:10.951 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:43:10 compute-0 nova_compute[243452]: 2025-12-09 16:43:10.952 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:43:10 compute-0 nova_compute[243452]: 2025-12-09 16:43:10.995 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:43:10 compute-0 nova_compute[243452]: 2025-12-09 16:43:10.995 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:43:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:43:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/2144602056' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:43:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/2144602056' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:43:12 compute-0 nova_compute[243452]: 2025-12-09 16:43:12.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:43:12 compute-0 nova_compute[243452]: 2025-12-09 16:43:12.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:43:12 compute-0 nova_compute[243452]: 2025-12-09 16:43:12.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:43:12 compute-0 ceph-mon[75222]: pgmap v1335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:14 compute-0 nova_compute[243452]: 2025-12-09 16:43:14.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:43:14 compute-0 nova_compute[243452]: 2025-12-09 16:43:14.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:43:14 compute-0 ceph-mon[75222]: pgmap v1336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:15 compute-0 nova_compute[243452]: 2025-12-09 16:43:15.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:43:16 compute-0 nova_compute[243452]: 2025-12-09 16:43:16.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:43:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:43:16 compute-0 ceph-mon[75222]: pgmap v1337: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.214381) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298597214431, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2059, "num_deletes": 252, "total_data_size": 3470779, "memory_usage": 3528448, "flush_reason": "Manual Compaction"}
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298597261865, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3403776, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25634, "largest_seqno": 27692, "table_properties": {"data_size": 3394336, "index_size": 5998, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18859, "raw_average_key_size": 20, "raw_value_size": 3375527, "raw_average_value_size": 3610, "num_data_blocks": 266, "num_entries": 935, "num_filter_entries": 935, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765298371, "oldest_key_time": 1765298371, "file_creation_time": 1765298597, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 47557 microseconds, and 8237 cpu microseconds.
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.261932) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3403776 bytes OK
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.261966) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.277264) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.277340) EVENT_LOG_v1 {"time_micros": 1765298597277324, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.277380) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3462174, prev total WAL file size 3462174, number of live WAL files 2.
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.279208) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3324KB)], [59(7324KB)]
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298597279308, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10903692, "oldest_snapshot_seqno": -1}
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5096 keys, 9136585 bytes, temperature: kUnknown
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298597378194, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9136585, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9100729, "index_size": 21986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 126467, "raw_average_key_size": 24, "raw_value_size": 9006854, "raw_average_value_size": 1767, "num_data_blocks": 909, "num_entries": 5096, "num_filter_entries": 5096, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765296181, "oldest_key_time": 0, "file_creation_time": 1765298597, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "592c5a45-08c3-40c7-974d-53c403a6ec6c", "db_session_id": "ANVSLO0IM0FKQE2HIWIF", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.378400) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9136585 bytes
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.389459) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.2 rd, 92.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5616, records dropped: 520 output_compression: NoCompression
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.389475) EVENT_LOG_v1 {"time_micros": 1765298597389468, "job": 32, "event": "compaction_finished", "compaction_time_micros": 98938, "compaction_time_cpu_micros": 39974, "output_level": 6, "num_output_files": 1, "total_output_size": 9136585, "num_input_records": 5616, "num_output_records": 5096, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298597390192, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765298597391589, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.278972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.391738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.391746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.391748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.391750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:43:17 compute-0 ceph-mon[75222]: rocksdb: (Original Log Time 2025/12/09-16:43:17.391751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 09 16:43:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:43:17.863 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:43:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:43:17.864 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:43:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:43:17.864 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:43:18 compute-0 ceph-mon[75222]: pgmap v1338: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:20 compute-0 ceph-mon[75222]: pgmap v1339: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:43:22 compute-0 ceph-mon[75222]: pgmap v1340: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:24 compute-0 ceph-mon[75222]: pgmap v1341: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:25 compute-0 podman[262136]: 2025-12-09 16:43:25.645785252 +0000 UTC m=+0.077921201 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:43:25 compute-0 podman[262135]: 2025-12-09 16:43:25.686716482 +0000 UTC m=+0.114792536 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:43:25
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'default.rgw.log']
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 16:43:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:43:26 compute-0 ceph-mon[75222]: pgmap v1342: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:43:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:28 compute-0 ceph-mon[75222]: pgmap v1343: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:30 compute-0 ceph-mon[75222]: pgmap v1344: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:43:32 compute-0 ceph-mon[75222]: pgmap v1345: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:34 compute-0 ceph-mon[75222]: pgmap v1346: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:43:36 compute-0 ceph-mon[75222]: pgmap v1347: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:36 compute-0 sshd-session[262182]: Invalid user oracle from 146.190.31.45 port 50880
Dec 09 16:43:36 compute-0 sshd-session[262182]: Connection closed by invalid user oracle 146.190.31.45 port 50880 [preauth]
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 9.146598025505342e-07 of space, bias 1.0, pg target 0.0002743979407651603 quantized to 32 (current 32)
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8061872431253358e-06 of space, bias 4.0, pg target 0.002167424691750403 quantized to 16 (current 16)
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:43:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:43:38 compute-0 ceph-mon[75222]: pgmap v1348: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:40 compute-0 ceph-mon[75222]: pgmap v1349: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:40 compute-0 podman[262184]: 2025-12-09 16:43:40.636137118 +0000 UTC m=+0.082342155 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 16:43:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:43:42 compute-0 ceph-mon[75222]: pgmap v1350: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:44 compute-0 ceph-mon[75222]: pgmap v1351: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:46 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:43:46 compute-0 ceph-mon[75222]: pgmap v1352: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:46 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:48 compute-0 ceph-mon[75222]: pgmap v1353: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:48 compute-0 sudo[262205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:43:48 compute-0 sudo[262205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:43:48 compute-0 sudo[262205]: pam_unix(sudo:session): session closed for user root
Dec 09 16:43:48 compute-0 sudo[262230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 09 16:43:48 compute-0 sudo[262230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:43:48 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:49 compute-0 sudo[262230]: pam_unix(sudo:session): session closed for user root
Dec 09 16:43:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:43:49 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:43:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 16:43:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:43:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 16:43:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:43:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 16:43:49 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:43:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 16:43:49 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:43:49 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:43:49 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:43:49 compute-0 sudo[262287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:43:49 compute-0 sudo[262287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:43:49 compute-0 sudo[262287]: pam_unix(sudo:session): session closed for user root
Dec 09 16:43:49 compute-0 sudo[262312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 09 16:43:49 compute-0 sudo[262312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:43:49 compute-0 podman[262349]: 2025-12-09 16:43:49.880887455 +0000 UTC m=+0.061555086 container create 82d697f44a424ff99b6ffecf140e0d8fbda3c4bb796953e3cfa0606268f66b11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 16:43:49 compute-0 systemd[1]: Started libpod-conmon-82d697f44a424ff99b6ffecf140e0d8fbda3c4bb796953e3cfa0606268f66b11.scope.
Dec 09 16:43:49 compute-0 podman[262349]: 2025-12-09 16:43:49.850044371 +0000 UTC m=+0.030712052 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:43:49 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:43:49 compute-0 podman[262349]: 2025-12-09 16:43:49.984106802 +0000 UTC m=+0.164774423 container init 82d697f44a424ff99b6ffecf140e0d8fbda3c4bb796953e3cfa0606268f66b11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:43:49 compute-0 podman[262349]: 2025-12-09 16:43:49.993151508 +0000 UTC m=+0.173819109 container start 82d697f44a424ff99b6ffecf140e0d8fbda3c4bb796953e3cfa0606268f66b11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Dec 09 16:43:49 compute-0 podman[262349]: 2025-12-09 16:43:49.997047719 +0000 UTC m=+0.177715530 container attach 82d697f44a424ff99b6ffecf140e0d8fbda3c4bb796953e3cfa0606268f66b11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hofstadter, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:43:49 compute-0 epic_hofstadter[262365]: 167 167
Dec 09 16:43:49 compute-0 systemd[1]: libpod-82d697f44a424ff99b6ffecf140e0d8fbda3c4bb796953e3cfa0606268f66b11.scope: Deactivated successfully.
Dec 09 16:43:49 compute-0 conmon[262365]: conmon 82d697f44a424ff99b6f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82d697f44a424ff99b6ffecf140e0d8fbda3c4bb796953e3cfa0606268f66b11.scope/container/memory.events
Dec 09 16:43:50 compute-0 podman[262349]: 2025-12-09 16:43:50.000238299 +0000 UTC m=+0.180905900 container died 82d697f44a424ff99b6ffecf140e0d8fbda3c4bb796953e3cfa0606268f66b11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hofstadter, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee21edc7b358fd13e17ff21a67c14f38b5083f63c747925c70b9e8a475b38722-merged.mount: Deactivated successfully.
Dec 09 16:43:50 compute-0 podman[262349]: 2025-12-09 16:43:50.043171927 +0000 UTC m=+0.223839538 container remove 82d697f44a424ff99b6ffecf140e0d8fbda3c4bb796953e3cfa0606268f66b11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hofstadter, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 09 16:43:50 compute-0 systemd[1]: libpod-conmon-82d697f44a424ff99b6ffecf140e0d8fbda3c4bb796953e3cfa0606268f66b11.scope: Deactivated successfully.
Dec 09 16:43:50 compute-0 podman[262389]: 2025-12-09 16:43:50.231853517 +0000 UTC m=+0.049568247 container create 27f7bc6b1b3b86e517099deaa10eaeea4f5bc4e683e13d62e59b16d065d813d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_clarke, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:43:50 compute-0 systemd[1]: Started libpod-conmon-27f7bc6b1b3b86e517099deaa10eaeea4f5bc4e683e13d62e59b16d065d813d0.scope.
Dec 09 16:43:50 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:43:50 compute-0 podman[262389]: 2025-12-09 16:43:50.210310336 +0000 UTC m=+0.028025056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d4546066c40c799c4d4617b94e11a3d94286d819f4444ff56bcf46dd2b96db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d4546066c40c799c4d4617b94e11a3d94286d819f4444ff56bcf46dd2b96db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d4546066c40c799c4d4617b94e11a3d94286d819f4444ff56bcf46dd2b96db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d4546066c40c799c4d4617b94e11a3d94286d819f4444ff56bcf46dd2b96db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d4546066c40c799c4d4617b94e11a3d94286d819f4444ff56bcf46dd2b96db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 16:43:50 compute-0 podman[262389]: 2025-12-09 16:43:50.322311542 +0000 UTC m=+0.140026282 container init 27f7bc6b1b3b86e517099deaa10eaeea4f5bc4e683e13d62e59b16d065d813d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_clarke, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 09 16:43:50 compute-0 podman[262389]: 2025-12-09 16:43:50.329687151 +0000 UTC m=+0.147401841 container start 27f7bc6b1b3b86e517099deaa10eaeea4f5bc4e683e13d62e59b16d065d813d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_clarke, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 09 16:43:50 compute-0 podman[262389]: 2025-12-09 16:43:50.332928363 +0000 UTC m=+0.150643093 container attach 27f7bc6b1b3b86e517099deaa10eaeea4f5bc4e683e13d62e59b16d065d813d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_clarke, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:43:50 compute-0 ceph-mon[75222]: pgmap v1354: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:43:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 09 16:43:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:43:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 09 16:43:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 09 16:43:50 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:43:50 compute-0 epic_clarke[262405]: --> passed data devices: 0 physical, 3 LVM
Dec 09 16:43:50 compute-0 epic_clarke[262405]: --> All data devices are unavailable
Dec 09 16:43:50 compute-0 systemd[1]: libpod-27f7bc6b1b3b86e517099deaa10eaeea4f5bc4e683e13d62e59b16d065d813d0.scope: Deactivated successfully.
Dec 09 16:43:50 compute-0 podman[262425]: 2025-12-09 16:43:50.899045025 +0000 UTC m=+0.039820980 container died 27f7bc6b1b3b86e517099deaa10eaeea4f5bc4e683e13d62e59b16d065d813d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_clarke, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 16:43:50 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3d4546066c40c799c4d4617b94e11a3d94286d819f4444ff56bcf46dd2b96db-merged.mount: Deactivated successfully.
Dec 09 16:43:50 compute-0 podman[262425]: 2025-12-09 16:43:50.948932219 +0000 UTC m=+0.089708114 container remove 27f7bc6b1b3b86e517099deaa10eaeea4f5bc4e683e13d62e59b16d065d813d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 16:43:50 compute-0 systemd[1]: libpod-conmon-27f7bc6b1b3b86e517099deaa10eaeea4f5bc4e683e13d62e59b16d065d813d0.scope: Deactivated successfully.
Dec 09 16:43:51 compute-0 sudo[262312]: pam_unix(sudo:session): session closed for user root
Dec 09 16:43:51 compute-0 sudo[262440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:43:51 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:43:51 compute-0 sudo[262440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:43:51 compute-0 sudo[262440]: pam_unix(sudo:session): session closed for user root
Dec 09 16:43:51 compute-0 sudo[262465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- lvm list --format json
Dec 09 16:43:51 compute-0 sudo[262465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:43:51 compute-0 podman[262502]: 2025-12-09 16:43:51.502342201 +0000 UTC m=+0.041650362 container create 139c69d8e533255d47075205ff86ef418445fcd1331a3423a77f633582b1b40a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 16:43:51 compute-0 systemd[1]: Started libpod-conmon-139c69d8e533255d47075205ff86ef418445fcd1331a3423a77f633582b1b40a.scope.
Dec 09 16:43:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:43:51 compute-0 podman[262502]: 2025-12-09 16:43:51.484381541 +0000 UTC m=+0.023689722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:43:51 compute-0 podman[262502]: 2025-12-09 16:43:51.586367183 +0000 UTC m=+0.125675364 container init 139c69d8e533255d47075205ff86ef418445fcd1331a3423a77f633582b1b40a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_khorana, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 16:43:51 compute-0 podman[262502]: 2025-12-09 16:43:51.592132107 +0000 UTC m=+0.131440258 container start 139c69d8e533255d47075205ff86ef418445fcd1331a3423a77f633582b1b40a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_khorana, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 16:43:51 compute-0 podman[262502]: 2025-12-09 16:43:51.596442459 +0000 UTC m=+0.135750730 container attach 139c69d8e533255d47075205ff86ef418445fcd1331a3423a77f633582b1b40a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_khorana, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 09 16:43:51 compute-0 confident_khorana[262519]: 167 167
Dec 09 16:43:51 compute-0 systemd[1]: libpod-139c69d8e533255d47075205ff86ef418445fcd1331a3423a77f633582b1b40a.scope: Deactivated successfully.
Dec 09 16:43:51 compute-0 podman[262502]: 2025-12-09 16:43:51.598813906 +0000 UTC m=+0.138122077 container died 139c69d8e533255d47075205ff86ef418445fcd1331a3423a77f633582b1b40a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 09 16:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9feac882b4304f4d071dee41744263d952fc78d0fd22396c6df8dd4ddb5c6030-merged.mount: Deactivated successfully.
Dec 09 16:43:51 compute-0 podman[262502]: 2025-12-09 16:43:51.641583199 +0000 UTC m=+0.180891350 container remove 139c69d8e533255d47075205ff86ef418445fcd1331a3423a77f633582b1b40a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_khorana, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:43:51 compute-0 systemd[1]: libpod-conmon-139c69d8e533255d47075205ff86ef418445fcd1331a3423a77f633582b1b40a.scope: Deactivated successfully.
Dec 09 16:43:51 compute-0 podman[262543]: 2025-12-09 16:43:51.834970102 +0000 UTC m=+0.047992412 container create ff5d29814b55d1c03116c157f041b939b9e7e59e976846514eaf45f49410b7c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chandrasekhar, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 16:43:51 compute-0 systemd[1]: Started libpod-conmon-ff5d29814b55d1c03116c157f041b939b9e7e59e976846514eaf45f49410b7c8.scope.
Dec 09 16:43:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f742fe3e0c5416e84ca7728f2a03be462eb79753d4f45a4688508c6788acb33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:43:51 compute-0 podman[262543]: 2025-12-09 16:43:51.814588044 +0000 UTC m=+0.027610364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f742fe3e0c5416e84ca7728f2a03be462eb79753d4f45a4688508c6788acb33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f742fe3e0c5416e84ca7728f2a03be462eb79753d4f45a4688508c6788acb33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f742fe3e0c5416e84ca7728f2a03be462eb79753d4f45a4688508c6788acb33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 16:43:51 compute-0 podman[262543]: 2025-12-09 16:43:51.925497489 +0000 UTC m=+0.138519889 container init ff5d29814b55d1c03116c157f041b939b9e7e59e976846514eaf45f49410b7c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chandrasekhar, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 09 16:43:51 compute-0 podman[262543]: 2025-12-09 16:43:51.945907707 +0000 UTC m=+0.158930017 container start ff5d29814b55d1c03116c157f041b939b9e7e59e976846514eaf45f49410b7c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chandrasekhar, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 09 16:43:51 compute-0 podman[262543]: 2025-12-09 16:43:51.949578511 +0000 UTC m=+0.162600831 container attach ff5d29814b55d1c03116c157f041b939b9e7e59e976846514eaf45f49410b7c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]: {
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:     "0": [
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:         {
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "devices": [
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "/dev/loop3"
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             ],
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_name": "ceph_lv0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_size": "21470642176",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5f4f01e5-fa0f-4477-b4bb-353e06b17907,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "name": "ceph_lv0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "tags": {
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.block_uuid": "BaHDug-xY6Y-aff1-E3cG-A9v7-kNod-IZev1q",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.cluster_name": "ceph",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.crush_device_class": "",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.encrypted": "0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.objectstore": "bluestore",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.osd_fsid": "5f4f01e5-fa0f-4477-b4bb-353e06b17907",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.osd_id": "0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.type": "block",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.vdo": "0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.with_tpm": "0"
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             },
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "type": "block",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "vg_name": "ceph_vg0"
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:         }
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:     ],
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:     "1": [
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:         {
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "devices": [
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "/dev/loop4"
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             ],
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_name": "ceph_lv1",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_size": "21470642176",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=40156d55-4083-4945-ba83-3b1dee6eabbb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "name": "ceph_lv1",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "tags": {
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.block_uuid": "6BCrYg-2FCX-cV6a-5YLO-hRwy-u4Q7-4Y6mtP",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.cluster_name": "ceph",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.crush_device_class": "",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.encrypted": "0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.objectstore": "bluestore",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.osd_fsid": "40156d55-4083-4945-ba83-3b1dee6eabbb",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.osd_id": "1",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.type": "block",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.vdo": "0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.with_tpm": "0"
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             },
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "type": "block",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "vg_name": "ceph_vg1"
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:         }
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:     ],
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:     "2": [
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:         {
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "devices": [
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "/dev/loop5"
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             ],
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_name": "ceph_lv2",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_size": "21470642176",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=67f67f44-54fc-54ea-8df0-10931b6ecdaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=243996ad-36e8-4855-a1e1-ac93cfca0f40,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "lv_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "name": "ceph_lv2",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "tags": {
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.block_uuid": "OaW9Qx-BdTY-xaK4-NxST-yWYV-T3eC-C7DH8o",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.cluster_fsid": "67f67f44-54fc-54ea-8df0-10931b6ecdaf",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.cluster_name": "ceph",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.crush_device_class": "",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.encrypted": "0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.objectstore": "bluestore",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.osd_fsid": "243996ad-36e8-4855-a1e1-ac93cfca0f40",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.osd_id": "2",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.type": "block",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.vdo": "0",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:                 "ceph.with_tpm": "0"
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             },
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "type": "block",
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:             "vg_name": "ceph_vg2"
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:         }
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]:     ]
Dec 09 16:43:52 compute-0 sad_chandrasekhar[262561]: }
Dec 09 16:43:52 compute-0 systemd[1]: libpod-ff5d29814b55d1c03116c157f041b939b9e7e59e976846514eaf45f49410b7c8.scope: Deactivated successfully.
Dec 09 16:43:52 compute-0 podman[262543]: 2025-12-09 16:43:52.266428455 +0000 UTC m=+0.479450755 container died ff5d29814b55d1c03116c157f041b939b9e7e59e976846514eaf45f49410b7c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 16:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f742fe3e0c5416e84ca7728f2a03be462eb79753d4f45a4688508c6788acb33-merged.mount: Deactivated successfully.
Dec 09 16:43:52 compute-0 podman[262543]: 2025-12-09 16:43:52.316078613 +0000 UTC m=+0.529100913 container remove ff5d29814b55d1c03116c157f041b939b9e7e59e976846514eaf45f49410b7c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 16:43:52 compute-0 systemd[1]: libpod-conmon-ff5d29814b55d1c03116c157f041b939b9e7e59e976846514eaf45f49410b7c8.scope: Deactivated successfully.
Dec 09 16:43:52 compute-0 sudo[262465]: pam_unix(sudo:session): session closed for user root
Dec 09 16:43:52 compute-0 ceph-mon[75222]: pgmap v1355: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:52 compute-0 sudo[262584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 16:43:52 compute-0 sudo[262584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:43:52 compute-0 sudo[262584]: pam_unix(sudo:session): session closed for user root
Dec 09 16:43:52 compute-0 sudo[262609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/67f67f44-54fc-54ea-8df0-10931b6ecdaf/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 67f67f44-54fc-54ea-8df0-10931b6ecdaf -- raw list --format json
Dec 09 16:43:52 compute-0 sudo[262609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:43:52 compute-0 podman[262647]: 2025-12-09 16:43:52.767083761 +0000 UTC m=+0.043893926 container create 0a86a07dd3d115489d0cc82a470b6f1986ab4e128fe3750d56eadc3d02ad4024 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_napier, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 16:43:52 compute-0 systemd[1]: Started libpod-conmon-0a86a07dd3d115489d0cc82a470b6f1986ab4e128fe3750d56eadc3d02ad4024.scope.
Dec 09 16:43:52 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:43:52 compute-0 podman[262647]: 2025-12-09 16:43:52.747216457 +0000 UTC m=+0.024026652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:43:52 compute-0 podman[262647]: 2025-12-09 16:43:52.850577078 +0000 UTC m=+0.127387283 container init 0a86a07dd3d115489d0cc82a470b6f1986ab4e128fe3750d56eadc3d02ad4024 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_napier, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:43:52 compute-0 podman[262647]: 2025-12-09 16:43:52.864273067 +0000 UTC m=+0.141083232 container start 0a86a07dd3d115489d0cc82a470b6f1986ab4e128fe3750d56eadc3d02ad4024 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_napier, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 09 16:43:52 compute-0 podman[262647]: 2025-12-09 16:43:52.86861397 +0000 UTC m=+0.145424225 container attach 0a86a07dd3d115489d0cc82a470b6f1986ab4e128fe3750d56eadc3d02ad4024 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:43:52 compute-0 festive_napier[262663]: 167 167
Dec 09 16:43:52 compute-0 systemd[1]: libpod-0a86a07dd3d115489d0cc82a470b6f1986ab4e128fe3750d56eadc3d02ad4024.scope: Deactivated successfully.
Dec 09 16:43:52 compute-0 podman[262647]: 2025-12-09 16:43:52.870695239 +0000 UTC m=+0.147505414 container died 0a86a07dd3d115489d0cc82a470b6f1986ab4e128fe3750d56eadc3d02ad4024 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_napier, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 16:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e4b7a4704675278ac4cc72e984e0ed53748f48eff929438b5c89fb200315a0f-merged.mount: Deactivated successfully.
Dec 09 16:43:52 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:52 compute-0 podman[262647]: 2025-12-09 16:43:52.914984335 +0000 UTC m=+0.191794500 container remove 0a86a07dd3d115489d0cc82a470b6f1986ab4e128fe3750d56eadc3d02ad4024 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:43:52 compute-0 systemd[1]: libpod-conmon-0a86a07dd3d115489d0cc82a470b6f1986ab4e128fe3750d56eadc3d02ad4024.scope: Deactivated successfully.
Dec 09 16:43:53 compute-0 podman[262687]: 2025-12-09 16:43:53.119430902 +0000 UTC m=+0.072209659 container create f70b248c28c80787fcef6dd50e7c2036522011ca5122e5c21838cb4f3c187701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 09 16:43:53 compute-0 podman[262687]: 2025-12-09 16:43:53.068491507 +0000 UTC m=+0.021270264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 09 16:43:53 compute-0 systemd[1]: Started libpod-conmon-f70b248c28c80787fcef6dd50e7c2036522011ca5122e5c21838cb4f3c187701.scope.
Dec 09 16:43:53 compute-0 systemd[1]: Started libcrun container.
Dec 09 16:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e311a7d7bf4a26b3f527aa768cefdd08366db4150db657fb872991a4ab6cd54d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 16:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e311a7d7bf4a26b3f527aa768cefdd08366db4150db657fb872991a4ab6cd54d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 16:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e311a7d7bf4a26b3f527aa768cefdd08366db4150db657fb872991a4ab6cd54d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 16:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e311a7d7bf4a26b3f527aa768cefdd08366db4150db657fb872991a4ab6cd54d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
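
The four xfs notices are logged as paths are bind-mounted into the helper's rootfs: without the xfs bigtime feature, inode timestamps are signed 32-bit seconds, and the hex limit in the message is exactly the classic 2038 cutoff. Decoding it:

    from datetime import datetime, timezone

    # 0x7fffffff is the kernel's printed limit for non-bigtime xfs timestamps.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
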
Dec 09 16:43:53 compute-0 podman[262687]: 2025-12-09 16:43:53.217134482 +0000 UTC m=+0.169913239 container init f70b248c28c80787fcef6dd50e7c2036522011ca5122e5c21838cb4f3c187701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 16:43:53 compute-0 podman[262687]: 2025-12-09 16:43:53.230970604 +0000 UTC m=+0.183749361 container start f70b248c28c80787fcef6dd50e7c2036522011ca5122e5c21838cb4f3c187701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 16:43:53 compute-0 podman[262687]: 2025-12-09 16:43:53.234879505 +0000 UTC m=+0.187658262 container attach f70b248c28c80787fcef6dd50e7c2036522011ca5122e5c21838cb4f3c187701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldstine, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 09 16:43:53 compute-0 lvm[262783]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:43:53 compute-0 lvm[262783]: VG ceph_vg1 finished
Dec 09 16:43:53 compute-0 lvm[262782]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:43:53 compute-0 lvm[262782]: VG ceph_vg0 finished
Dec 09 16:43:53 compute-0 lvm[262785]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:43:53 compute-0 lvm[262785]: VG ceph_vg2 finished
Dec 09 16:43:54 compute-0 ecstatic_goldstine[262704]: {}
Dec 09 16:43:54 compute-0 systemd[1]: libpod-f70b248c28c80787fcef6dd50e7c2036522011ca5122e5c21838cb4f3c187701.scope: Deactivated successfully.
Dec 09 16:43:54 compute-0 podman[262687]: 2025-12-09 16:43:54.072847645 +0000 UTC m=+1.025626382 container died f70b248c28c80787fcef6dd50e7c2036522011ca5122e5c21838cb4f3c187701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 09 16:43:54 compute-0 systemd[1]: libpod-f70b248c28c80787fcef6dd50e7c2036522011ca5122e5c21838cb4f3c187701.scope: Consumed 1.321s CPU time.
Dec 09 16:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e311a7d7bf4a26b3f527aa768cefdd08366db4150db657fb872991a4ab6cd54d-merged.mount: Deactivated successfully.
Dec 09 16:43:54 compute-0 podman[262687]: 2025-12-09 16:43:54.120318301 +0000 UTC m=+1.073097038 container remove f70b248c28c80787fcef6dd50e7c2036522011ca5122e5c21838cb4f3c187701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldstine, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:43:54 compute-0 systemd[1]: libpod-conmon-f70b248c28c80787fcef6dd50e7c2036522011ca5122e5c21838cb4f3c187701.scope: Deactivated successfully.
Dec 09 16:43:54 compute-0 sudo[262609]: pam_unix(sudo:session): session closed for user root
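
The burst above is the full lifecycle of one short-lived helper: create, init, start, attach, died, remove, each stamped with a per-process monotonic offset (m=+...). Because create and remove for a given container are logged by the same podman PID, the offsets can be subtracted directly. A sketch, where `lifetimes` and the regex are constructions of this note rather than anything from podman:

    import re
    from collections import defaultdict

    # Matches podman event lines like the ones journaled above.
    EVENT = re.compile(
        r"m=\+(?P<mono>[0-9.]+) container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    def lifetimes(lines):
        """Seconds from 'container create' to 'container remove' per ID."""
        seen = defaultdict(dict)
        for line in lines:
            m = EVENT.search(line)
            if m:
                seen[m.group("cid")][m.group("event")] = float(m.group("mono"))
        return {cid: ev["remove"] - ev["create"]
                for cid, ev in seen.items()
                if "create" in ev and "remove" in ev}

Applied to the f70b248c entries above this gives roughly 1.0 s: m=+0.072 at create, m=+1.073 at remove.
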
Dec 09 16:43:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 16:43:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:43:54 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 16:43:54 compute-0 ceph-mon[75222]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:43:54 compute-0 sudo[262799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 16:43:54 compute-0 sudo[262799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 16:43:54 compute-0 sudo[262799]: pam_unix(sudo:session): session closed for user root
Dec 09 16:43:54 compute-0 ceph-mon[75222]: pgmap v1356: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:43:54 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' 
Dec 09 16:43:54 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:56 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:43:56 compute-0 ceph-mon[75222]: pgmap v1357: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:43:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:43:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:43:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:43:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:43:56 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:43:56 compute-0 podman[262825]: 2025-12-09 16:43:56.644287117 +0000 UTC m=+0.074156664 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 16:43:56 compute-0 podman[262824]: 2025-12-09 16:43:56.678050074 +0000 UTC m=+0.117224595 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
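
Note the health_status records embed the full container definition in a config_data field printed as a Python dict literal (single quotes, bare True), not JSON, so json.loads() would reject it. ast.literal_eval parses it safely once the field is isolated; the excerpt below is the ovn_controller healthcheck from the line above, trimmed to three keys:

    import ast

    config_data = (
        "{'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', "
        "'test': '/openstack/healthcheck'}, 'net': 'host', 'privileged': True}"
    )

    cfg = ast.literal_eval(config_data)  # literal_eval: no code execution risk
    hc = cfg["healthcheck"]
    print(hc["test"], "<-", hc["mount"])
    # /openstack/healthcheck <- /var/lib/openstack/healthchecks/ovn_controller
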
Dec 09 16:43:56 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:58 compute-0 ceph-mon[75222]: pgmap v1358: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:58 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:43:59 compute-0 ceph-mon[75222]: pgmap v1359: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:00 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:01 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:44:01 compute-0 ceph-mon[75222]: pgmap v1360: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:02 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:04 compute-0 ceph-mon[75222]: pgmap v1361: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:04 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:06 compute-0 ceph-mon[75222]: pgmap v1362: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:06 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:44:06 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:08 compute-0 nova_compute[243452]: 2025-12-09 16:44:08.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:44:08 compute-0 ceph-mon[75222]: pgmap v1363: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:08 compute-0 nova_compute[243452]: 2025-12-09 16:44:08.101 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:44:08 compute-0 nova_compute[243452]: 2025-12-09 16:44:08.101 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:44:08 compute-0 nova_compute[243452]: 2025-12-09 16:44:08.102 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:44:08 compute-0 nova_compute[243452]: 2025-12-09 16:44:08.102 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 16:44:08 compute-0 nova_compute[243452]: 2025-12-09 16:44:08.102 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:44:08 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:44:08 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2413325419' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:44:08 compute-0 nova_compute[243452]: 2025-12-09 16:44:08.689 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
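
nova-compute sizes its Ceph-backed disk inventory by shelling out to ceph df with the openstack client id, as logged above (and again about a second later for placement inventory). A sketch of consuming that output; `ceph_free_gib` is a name coined here, and the stats/total_avail_bytes layout is the usual `ceph df --format=json` shape but should be treated as an assumption for your release:

    import json
    import subprocess

    def ceph_free_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            capture_output=True, text=True, check=True, timeout=30,
        )
        stats = json.loads(out.stdout)["stats"]      # assumed key layout
        return stats["total_avail_bytes"] / 2**30    # pgmap above shows 60 GiB avail
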
Dec 09 16:44:08 compute-0 nova_compute[243452]: 2025-12-09 16:44:08.856 243461 WARNING nova.virt.libvirt.driver [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 16:44:08 compute-0 nova_compute[243452]: 2025-12-09 16:44:08.857 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5075MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 16:44:08 compute-0 nova_compute[243452]: 2025-12-09 16:44:08.857 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:44:08 compute-0 nova_compute[243452]: 2025-12-09 16:44:08.858 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:44:08 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:09 compute-0 nova_compute[243452]: 2025-12-09 16:44:09.010 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 16:44:09 compute-0 nova_compute[243452]: 2025-12-09 16:44:09.010 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 16:44:09 compute-0 nova_compute[243452]: 2025-12-09 16:44:09.033 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 16:44:09 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2413325419' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:44:09 compute-0 sshd-session[262891]: Accepted publickey for zuul from 192.168.122.10 port 39876 ssh2: ECDSA SHA256:5Z3PQlHQTNxVypj+2lSB7x2k5BQcImpWN0ATZCDqSSQ
Dec 09 16:44:09 compute-0 systemd-logind[786]: New session 56 of user zuul.
Dec 09 16:44:09 compute-0 systemd[1]: Started Session 56 of User zuul.
Dec 09 16:44:09 compute-0 sshd-session[262891]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 16:44:09 compute-0 sudo[262914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 09 16:44:09 compute-0 sudo[262914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 16:44:09 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 09 16:44:09 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/28396551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:44:09 compute-0 nova_compute[243452]: 2025-12-09 16:44:09.624 243461 DEBUG oslo_concurrency.processutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 16:44:09 compute-0 nova_compute[243452]: 2025-12-09 16:44:09.630 243461 DEBUG nova.compute.provider_tree [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed in ProviderTree for provider: ca130087-db63-46e1-b278-a80bb66e6865 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 16:44:09 compute-0 nova_compute[243452]: 2025-12-09 16:44:09.658 243461 DEBUG nova.scheduler.client.report [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Inventory has not changed for provider ca130087-db63-46e1-b278-a80bb66e6865 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 16:44:09 compute-0 nova_compute[243452]: 2025-12-09 16:44:09.660 243461 DEBUG nova.compute.resource_tracker [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 16:44:09 compute-0 nova_compute[243452]: 2025-12-09 16:44:09.661 243461 DEBUG oslo_concurrency.lockutils [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
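
The lockutils lines bracketing the resource-tracker update follow a fixed pattern: acquire with waited time, release with held time (0.803 s here, dominated by the two ceph df calls). A simplified stand-in for that pattern, not oslo.concurrency's implementation, using only the standard library:

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def timed_lock(name):
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        with lock:
            print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
            t1 = time.monotonic()
            try:
                yield
            finally:
                print(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')

    with timed_lock("compute_resources"):
        time.sleep(0.1)  # stands in for _update_available_resource()
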
Dec 09 16:44:10 compute-0 ceph-mon[75222]: pgmap v1364: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:10 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/28396551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 09 16:44:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 09 16:44:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045483121' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:44:10 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 09 16:44:10 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045483121' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:44:10 compute-0 nova_compute[243452]: 2025-12-09 16:44:10.662 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:44:10 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3045483121' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 09 16:44:11 compute-0 ceph-mon[75222]: from='client.? 192.168.122.10:0/3045483121' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 09 16:44:11 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:44:11 compute-0 podman[263057]: 2025-12-09 16:44:11.638529296 +0000 UTC m=+0.081277855 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 09 16:44:12 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14604 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:12 compute-0 nova_compute[243452]: 2025-12-09 16:44:12.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:44:12 compute-0 nova_compute[243452]: 2025-12-09 16:44:12.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 16:44:12 compute-0 nova_compute[243452]: 2025-12-09 16:44:12.055 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 16:44:12 compute-0 nova_compute[243452]: 2025-12-09 16:44:12.070 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 16:44:12 compute-0 nova_compute[243452]: 2025-12-09 16:44:12.071 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:44:12 compute-0 nova_compute[243452]: 2025-12-09 16:44:12.071 243461 DEBUG nova.compute.manager [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 16:44:12 compute-0 ceph-mon[75222]: pgmap v1365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:12 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14606 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:12 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:13 compute-0 ceph-mon[75222]: from='client.14604 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:13 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 09 16:44:13 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3396923838' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 09 16:44:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:13 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 6805 writes, 26K keys, 6805 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6805 writes, 1459 syncs, 4.66 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 352 writes, 673 keys, 352 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
                                           Interval WAL: 352 writes, 172 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
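
These DB Stats blocks are RocksDB's periodic dump (600 s interval, per the Uptime line), and the ratios it prints are simple derivations from the cumulative counters. Checking the arithmetic against the numbers journaled for this OSD:

    writes, syncs = 6805, 1459      # Cumulative WAL line above
    uptime_s = 2400.1
    ingest_gb = 0.02

    print(f"{writes / syncs:.2f} writes per sync")            # 4.66, as logged
    print(f"{ingest_gb * 1024 / uptime_s:.2f} MB/s ingest")   # 0.01, as logged
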
Dec 09 16:44:14 compute-0 nova_compute[243452]: 2025-12-09 16:44:14.064 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:44:14 compute-0 ceph-mon[75222]: from='client.14606 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:14 compute-0 ceph-mon[75222]: pgmap v1366: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:14 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3396923838' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 09 16:44:14 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:15 compute-0 nova_compute[243452]: 2025-12-09 16:44:15.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:44:15 compute-0 nova_compute[243452]: 2025-12-09 16:44:15.055 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:44:16 compute-0 nova_compute[243452]: 2025-12-09 16:44:16.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:44:16 compute-0 nova_compute[243452]: 2025-12-09 16:44:16.054 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:44:16 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:44:16 compute-0 ceph-mon[75222]: pgmap v1367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:16 compute-0 sshd-session[263230]: Invalid user oracle from 146.190.31.45 port 40842
Dec 09 16:44:16 compute-0 sshd-session[263230]: Connection closed by invalid user oracle 146.190.31.45 port 40842 [preauth]
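
The pair of sshd-session lines above is an unrelated pre-auth probe (username "oracle" from 146.190.31.45) that disconnected before authenticating, noise of the sort any internet-reachable host collects. When triaging a journal like this one it helps to tally such probes separately from the legitimate zuul sessions; a sketch assuming lines shaped like the entry above, with `probe_sources` coined here:

    import re
    from collections import Counter

    INVALID = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

    def probe_sources(lines):
        """Count sshd pre-auth probes per source IP."""
        hits = Counter()
        for line in lines:
            m = INVALID.search(line)
            if m:
                hits[m.group(2)] += 1
        return hits
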
Dec 09 16:44:16 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:17 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 8336 writes, 32K keys, 8336 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8336 writes, 1976 syncs, 4.22 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 313 writes, 736 keys, 313 commit groups, 1.0 writes per commit group, ingest: 0.31 MB, 0.00 MB/s
                                           Interval WAL: 313 writes, 147 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 09 16:44:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:44:17.865 155091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 16:44:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:44:17.866 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 16:44:17 compute-0 ovn_metadata_agent[155086]: 2025-12-09 16:44:17.866 155091 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 16:44:18 compute-0 ovs-vsctl[263261]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 09 16:44:18 compute-0 ceph-mon[75222]: pgmap v1368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:18 compute-0 virtqemud[243015]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 09 16:44:18 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:18 compute-0 virtqemud[243015]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 09 16:44:18 compute-0 virtqemud[243015]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 09 16:44:19 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: cache status {prefix=cache status} (starting...)
Dec 09 16:44:19 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: client ls {prefix=client ls} (starting...)
Dec 09 16:44:19 compute-0 lvm[263584]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 16:44:19 compute-0 lvm[263584]: VG ceph_vg0 finished
Dec 09 16:44:19 compute-0 lvm[263601]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 09 16:44:19 compute-0 lvm[263601]: VG ceph_vg2 finished
Dec 09 16:44:19 compute-0 lvm[263616]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 09 16:44:19 compute-0 lvm[263616]: VG ceph_vg1 finished
Dec 09 16:44:20 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14610 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:20 compute-0 ceph-mon[75222]: pgmap v1369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:20 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: damage ls {prefix=damage ls} (starting...)
Dec 09 16:44:20 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: dump loads {prefix=dump loads} (starting...)
Dec 09 16:44:20 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14612 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:20 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 09 16:44:20 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 09 16:44:20 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 09 16:44:20 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:20 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 09 16:44:21 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14614 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:44:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec 09 16:44:21 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933369489' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 09 16:44:21 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 09 16:44:21 compute-0 ceph-mon[75222]: from='client.14610 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:21 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3933369489' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 09 16:44:21 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 09 16:44:21 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14618 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 16:44:21 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1558319179' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:44:21 compute-0 ceph-mgr[75515]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 09 16:44:21 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv[75511]: 2025-12-09T16:44:21.559+0000 7fc8ad494640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 09 16:44:21 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: ops {prefix=ops} (starting...)
Dec 09 16:44:21 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec 09 16:44:21 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695289944' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 09 16:44:22 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:22 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 6853 writes, 27K keys, 6853 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6853 writes, 1449 syncs, 4.73 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 286 writes, 517 keys, 286 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s
                                           Interval WAL: 286 writes, 138 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 09 16:44:22 compute-0 nova_compute[243452]: 2025-12-09 16:44:22.047 243461 DEBUG oslo_service.periodic_task [None req-8ef5e864-f7dd-4e45-95e0-4532f2566626 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 16:44:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 09 16:44:22 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/227363500' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 09 16:44:22 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: session ls {prefix=session ls} (starting...)
Dec 09 16:44:22 compute-0 ceph-mon[75222]: from='client.14612 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:22 compute-0 ceph-mon[75222]: pgmap v1370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:22 compute-0 ceph-mon[75222]: from='client.14614 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:22 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1558319179' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 09 16:44:22 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1695289944' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 09 16:44:22 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/227363500' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 09 16:44:22 compute-0 ceph-mds[95396]: mds.cephfs.compute-0.izecis asok_command: status {prefix=status} (starting...)
Dec 09 16:44:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 09 16:44:22 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1378810301' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 09 16:44:22 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 09 16:44:22 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1102313007' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 09 16:44:22 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:22 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14632 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:23 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 09 16:44:23 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644178' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 09 16:44:23 compute-0 ceph-mon[75222]: from='client.14618 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:23 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1378810301' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 09 16:44:23 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1102313007' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 09 16:44:23 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2644178' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 09 16:44:23 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14634 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:23 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 09 16:44:23 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391814446' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 09 16:44:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec 09 16:44:24 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1445722699' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 09 16:44:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 09 16:44:24 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4015403423' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 09 16:44:24 compute-0 ceph-mon[75222]: pgmap v1371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:24 compute-0 ceph-mon[75222]: from='client.14632 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:24 compute-0 ceph-mon[75222]: from='client.14634 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:24 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2391814446' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 09 16:44:24 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1445722699' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 09 16:44:24 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4015403423' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 09 16:44:24 compute-0 ceph-mgr[75515]: [devicehealth INFO root] Check health
Dec 09 16:44:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 09 16:44:24 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3184795284' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 09 16:44:24 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 09 16:44:24 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169790592' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 09 16:44:24 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:25 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14648 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:25 compute-0 ceph-mgr[75515]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 09 16:44:25 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv[75511]: 2025-12-09T16:44:25.076+0000 7fc8ad494640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 09 16:44:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 09 16:44:25 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1616464711' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 09 16:44:25 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3184795284' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 09 16:44:25 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3169790592' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 09 16:44:25 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1616464711' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 09 16:44:25 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14652 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:25 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 09 16:44:25 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2143821384' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 09 16:44:25 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14654 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 278528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:32.115552+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 278528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:33.115820+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:34.115974+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:35.116151+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:36.116324+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 262144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:37.116509+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 262144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:38.116645+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 262144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:39.116831+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 253952 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:40.117056+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 253952 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:41.117210+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 245760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:42.117393+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 245760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:43.117600+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 245760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:44.117775+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:45.117965+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:46.118231+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:47.118433+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 229376 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:48.118622+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 229376 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:49.118879+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 212992 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:50.119181+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 212992 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:51.119409+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 212992 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:52.119640+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 204800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:53.119784+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 204800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:54.119960+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 196608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:55.120103+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 196608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:56.120278+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 204800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:57.120426+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 196608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:58.120567+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 196608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:59.120688+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 196608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:00.120871+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 188416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:01.121063+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 188416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:02.121212+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 188416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:03.121392+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 180224 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:04.121519+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 180224 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:05.121662+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 172032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:06.121888+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 172032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:07.122017+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 163840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:08.122170+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 163840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:09.122318+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 163840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:10.122505+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 155648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:11.122652+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 155648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:12.122823+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 155648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:13.123018+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:14.123205+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:15.123370+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:16.123559+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:17.123696+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:18.123869+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:19.124417+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:20.124647+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 122880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:21.124778+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 122880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:22.124943+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 122880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:23.125065+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 106496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:24.125189+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 106496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:25.125627+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:26.126003+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:27.126274+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:28.126397+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 81920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:29.126755+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 81920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:30.126911+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 73728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:31.127155+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 73728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:32.127373+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 73728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:33.127553+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 65536 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:34.127742+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 65536 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:35.127914+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 57344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:36.128199+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 57344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:37.128408+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 57344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:38.128542+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 49152 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:39.128669+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 49152 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:40.128831+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 49152 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:41.128995+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 49152 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:42.129336+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 40960 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:43.129481+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 40960 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:44.129787+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 32768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:45.129939+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 32768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:46.130106+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:47.130272+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:48.130398+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:49.130657+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:50.130803+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:51.130967+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 8192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:52.131114+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 8192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:53.131298+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 0 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:54.131434+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1040384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:55.131630+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1040384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:56.131806+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1040384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:57.131943+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1032192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:58.132204+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1032192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:59.132589+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:00.132773+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:01.132951+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:02.133113+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1015808 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:03.133313+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1015808 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:04.133475+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:05.133974+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:06.134245+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:07.134370+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:08.134572+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:09.135094+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:10.135270+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 974848 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:11.135463+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 974848 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:12.135597+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 974848 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:13.135784+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:14.135912+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:15.136181+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:16.136367+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:17.136480+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:18.136621+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:19.136756+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:20.136971+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:21.137137+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:22.137298+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:23.137491+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:24.137680+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:25.137781+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:26.137985+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:27.138137+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 933888 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:28.138281+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 933888 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:29.138411+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 925696 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:30.138836+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 925696 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:31.138969+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 917504 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:32.139065+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 917504 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:33.139178+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 917504 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:34.139338+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 917504 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:35.139505+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 909312 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:36.139905+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 909312 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:37.140313+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 909312 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:38.140465+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 901120 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:39.140627+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 901120 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:40.140775+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 892928 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:41.141987+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 892928 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:42.142171+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 892928 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:43.142493+0000)
Dec 09 16:44:25 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 884736 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:25 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:25 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:25 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:44.142707+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 884736 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:45.143078+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 876544 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:46.143455+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 876544 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:47.143680+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 868352 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:48.143983+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 868352 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:49.144235+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 860160 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:50.144412+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 860160 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:51.144540+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 860160 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:52.144776+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 851968 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:53.144930+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 851968 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:54.145058+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 851968 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:55.145191+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 843776 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:56.145367+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 843776 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:57.145503+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 843776 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:58.145635+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 835584 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:59.145774+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 835584 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:00.145921+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 835584 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:01.146138+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 827392 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:02.146321+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 827392 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:03.146406+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 819200 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:04.146524+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 819200 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:05.146660+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 819200 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:06.146839+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 811008 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:07.146979+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 811008 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:08.147130+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 794624 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:09.147276+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 794624 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:10.147483+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 794624 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:11.147644+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 786432 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:12.147783+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 786432 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:13.147936+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 778240 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:14.148083+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 778240 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:15.148195+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 778240 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:16.148341+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 770048 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:17.148458+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 770048 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:18.148598+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 770048 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:19.148734+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 761856 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:20.148896+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 761856 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:21.149038+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 761856 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:22.149222+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 753664 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:23.149379+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 753664 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:24.149485+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 753664 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:25.149653+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 745472 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:26.149864+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 745472 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:27.149964+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 737280 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:28.150130+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 737280 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:29.150253+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 737280 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:30.150430+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 729088 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:31.150562+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 729088 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:32.150665+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 720896 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:33.150776+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 720896 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:34.150886+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 720896 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:35.150993+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 712704 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:36.151124+0000)
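
Each tune_memory line pits the tcmalloc view of the process (mapped/unmapped/heap) against the 4 GiB target; with mapped memory under 2% of target the tuner has no pressure to react, consistent with old mem and new mem staying identical throughout this stretch. A toy calculation over one line's numbers (field meanings read off the log itself):

```python
# Numbers from one tune_memory line above (bytes).
target = 4_294_967_296   # memory target reported by the tuner
mapped = 71_548_928      # tcmalloc mapped bytes
heap   = 72_261_632      # tcmalloc heap size

print(f"mapped is {mapped / target:.1%} of target; "
      f"headroom {(target - mapped) / 2**30:.2f} GiB; "
      f"heap {heap / 2**20:.1f} MiB")
# -> mapped is 1.7% of target; headroom 3.93 GiB; heap 68.9 MiB
```
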
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [balancer INFO root] Optimize plan auto_2025-12-09_16:44:26
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 712704 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [balancer INFO root] do_upmap
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'images', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.control']
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [balancer INFO root] prepared 0/10 upmap changes
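
The interleaved ceph-mgr lines record one balancer pass: plan auto_2025-12-09_16:44:26, mode upmap with max misplaced 0.050000, eleven candidate pools, and 0 of up to 10 upmap changes prepared. A small sketch (script name and the exact message format are assumptions taken from this log) that tallies such results from journalctl text on stdin:

```python
import re
import sys

# Match the balancer result line seen above, e.g.
#   "[balancer INFO root] prepared 0/10 upmap changes"
pat = re.compile(r"\[balancer INFO root\] prepared (\d+)/(\d+) upmap changes")

for raw in sys.stdin:
    m = pat.search(raw)
    if m:
        prepared, limit = map(int, m.groups())
        print(f"balancer pass: {prepared} of up to {limit} changes prepared")
```

Fed with something like `journalctl | python3 balancer_passes.py` (script name hypothetical), it would report 0/10 for the pass above, suggesting the optimizer found no remappings worth proposing on this run.
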
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:37.151242+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 712704 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:38.151365+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 720896 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:39.151474+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 720896 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:40.151588+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 712704 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:41.151738+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 712704 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:42.151861+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 712704 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:43.152011+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 696320 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:44.152140+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 696320 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:45.152254+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 696320 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:46.152454+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 688128 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:47.152589+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 688128 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:48.152764+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 679936 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:49.152899+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 679936 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:50.153077+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 679936 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:51.153271+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5500 writes, 23K keys, 5500 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5500 writes, 820 syncs, 6.71 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5500 writes, 23K keys, 5500 commit groups, 1.0 writes per commit group, ingest: 18.60 MB, 0.03 MB/s
                                           Interval WAL: 5500 writes, 820 syncs, 6.71 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 614400 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:52.153446+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 614400 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:53.153566+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 606208 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:54.153677+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 606208 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:55.153796+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 606208 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:56.153973+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 589824 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:57.154108+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 589824 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:58.154228+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 581632 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:59.154440+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 581632 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:00.154563+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 581632 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:01.154742+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 573440 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:02.154930+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 573440 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:03.155085+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 557056 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:04.155217+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 557056 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:05.155364+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 557056 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:06.155566+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 548864 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:07.155705+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 548864 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:08.155865+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 548864 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:09.156034+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 540672 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:10.156204+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 540672 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:11.156391+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 532480 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:12.156547+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 532480 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:13.156713+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 532480 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:14.156878+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 524288 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:15.157015+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 524288 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:16.157205+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 516096 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:17.157338+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 516096 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:18.157479+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 516096 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:19.157606+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 507904 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:20.157812+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 507904 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:21.157972+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 499712 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:22.158143+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 491520 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:23.158324+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 491520 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:24.158439+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 483328 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:25.158554+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:26.158682+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 483328 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:27.158779+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 475136 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:28.158917+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 483328 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:29.159046+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 483328 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:30.159188+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 475136 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:31.159321+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 475136 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:32.159451+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 475136 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:33.159582+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 466944 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:34.159711+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 466944 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:35.159894+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 466944 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:36.160068+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 458752 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:37.160186+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 458752 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:38.160391+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 450560 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:39.160579+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 450560 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:40.160811+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 442368 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:41.161025+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 442368 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:42.161203+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 442368 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:43.161369+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 434176 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:44.161540+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 434176 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:45.161818+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 434176 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 325.206054688s of 325.221221924s, submitted: 8
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:46.161940+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 434176 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:47.162045+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:48.162204+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:49.162367+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:50.162512+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:51.162678+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:52.162849+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:53.163186+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:54.163356+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1040384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:55.163584+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 1032192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:56.163841+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 1032192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:57.164001+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:58.164325+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:59.164459+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:00.164590+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1007616 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:01.164757+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1007616 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:02.164915+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:03.165109+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:04.165282+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 991232 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:05.165455+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:06.165680+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:07.165840+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:08.165995+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:09.166170+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:10.166315+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 942080 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:11.166661+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 942080 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:12.167081+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:13.167243+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:14.167920+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:15.168285+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:16.168652+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:17.168885+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 917504 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:18.169019+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 917504 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:19.169230+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 909312 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:20.169345+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 909312 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:21.169463+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:22.169577+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:23.169698+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 892928 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:24.169794+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 892928 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:25.169945+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 892928 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:26.170162+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 884736 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:27.170310+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 884736 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:28.170459+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 860160 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:29.170597+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 860160 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:30.170766+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 860160 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:31.170891+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 851968 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:32.171032+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 851968 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:33.171177+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 835584 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:34.171319+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 827392 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:35.171432+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 827392 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:36.171594+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 819200 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:37.171805+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 819200 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:38.171958+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 819200 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:39.172081+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 819200 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:40.172189+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 811008 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:41.172334+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 811008 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:42.172472+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 811008 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:43.172608+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 802816 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:44.172783+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 794624 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:45.172912+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 786432 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:46.173086+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 786432 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:47.173284+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 778240 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:48.173433+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 778240 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:49.173577+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 778240 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:50.173698+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:51.173808+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:52.174026+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:53.174174+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:54.174378+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:55.174520+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:56.174756+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:57.174901+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:58.175062+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:59.175201+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:00.175392+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:01.175524+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:02.175645+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:03.175837+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:04.175976+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:05.176100+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:06.176265+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:07.176442+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:08.176588+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:09.176795+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:10.176962+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:11.177091+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:12.177237+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:13.177352+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:14.177476+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:15.177601+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:16.177830+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:17.177964+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:18.178104+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:19.178259+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:20.178386+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:21.178534+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:22.178678+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:23.178815+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:24.178946+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:25.179067+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:26.179230+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:27.179363+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:28.179895+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:29.180046+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 729088 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:30.180195+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 729088 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:31.180330+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 729088 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:32.180463+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:33.180579+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 712704 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:34.180701+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 696320 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:35.180821+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 696320 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:36.181005+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 696320 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:37.181202+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 696320 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:38.181387+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 696320 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:39.181529+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:40.181691+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:41.181846+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:42.181993+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:43.182125+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:44.182268+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:45.182415+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:46.182617+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:47.183531+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:48.184024+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:49.184162+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:50.184310+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:51.184482+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:52.185271+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:53.185599+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:54.185792+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:55.185967+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:56.186168+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:57.186539+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:58.186688+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:59.186807+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:00.186924+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:01.187076+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:02.187456+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:03.187611+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:04.187869+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:05.188046+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:06.188218+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:07.188357+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:08.188543+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:09.188688+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:10.188848+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:11.188974+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:12.189117+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:13.189255+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:14.189504+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:15.189658+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:16.189800+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:17.189957+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:18.190109+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:19.190253+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:20.190411+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:21.190572+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:22.192798+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:23.195134+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:24.196196+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:25.196323+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:26.196533+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:27.197902+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:28.198027+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:29.198769+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:30.198950+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:31.199107+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:32.199236+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:33.199368+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:34.199539+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:35.199655+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:36.199810+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:37.199989+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:38.200119+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:39.200235+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:40.200440+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:41.200577+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:42.200756+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:43.200903+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:44.201099+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:45.201225+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:46.201402+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:47.201568+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:48.201825+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:49.201969+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:50.202123+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:51.202285+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:52.202454+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:53.202619+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:54.202769+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:55.202900+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 638976 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:56.203243+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:57.203379+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:58.203535+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:59.203694+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:00.203906+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:01.204066+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:02.204201+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:03.204350+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:04.204461+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:05.204636+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:06.204800+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:07.204942+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:08.205113+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:09.205309+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:10.205492+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:11.205672+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:12.205801+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:13.205907+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:14.206074+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:15.206306+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:16.206603+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:17.206807+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:18.206954+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:19.207105+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:20.207236+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:21.207366+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:22.207500+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:23.207655+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:24.207788+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:25.207921+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:26.208049+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:27.208148+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:28.208330+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:29.208484+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:30.208647+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:31.208813+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:32.208968+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:33.209144+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:34.209302+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:35.209452+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:36.209582+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:37.209693+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:38.209881+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:39.210029+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:40.210162+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:41.210322+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:42.210453+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:43.210577+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:44.210686+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:45.210837+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:46.211025+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:47.211155+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:48.211301+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:49.211449+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:50.211582+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:51.211775+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:52.211912+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:53.212117+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:54.212299+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:55.212469+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:56.212626+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:57.212766+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:58.212924+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:59.213075+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:00.213209+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:01.213354+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:02.213490+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:03.213596+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 581632 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:04.213742+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:05.213881+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:06.214040+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:07.214184+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:08.214327+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:09.214455+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:10.214597+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:11.214771+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:12.214895+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:13.215020+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:14.215132+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:15.215454+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:16.215638+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:17.215791+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:18.215925+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 548864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:19.216076+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 548864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:20.216212+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 548864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:21.216329+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 548864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:22.216467+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 548864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:23.216632+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 548864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:24.216813+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 540672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:25.216947+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 540672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:26.217120+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 540672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:27.217281+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 540672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:28.217406+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:29.217601+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:30.217782+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:31.217930+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:32.218099+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:33.218260+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:34.218403+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:35.218539+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:36.218752+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:37.218872+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:38.219042+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:39.219167+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 516096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:40.219287+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 516096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:41.219463+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 516096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:42.219617+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 516096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:43.219787+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 516096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:44.219915+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 516096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:45.220096+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 516096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953706 data_alloc: 218103808 data_used: 8038
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.955017090s of 300.101013184s, submitted: 90
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f21a75c000
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:46.220274+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:47.220363+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:48.220491+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:49.220610+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:50.220752+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:51.220920+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:52.221069+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:53.221201+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:54.221347+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:55.221472+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:56.221678+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:57.221855+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:58.222441+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:59.222660+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:00.222886+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:01.223180+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:02.223366+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:03.223579+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:04.223782+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:05.223921+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:06.224098+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:07.224256+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:08.224518+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:09.224711+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:10.224897+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:11.225105+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:12.225255+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:13.225380+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:14.225534+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:15.225662+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:16.225804+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:17.225988+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:18.226141+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 163840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:19.226304+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 163840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:20.226456+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 163840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:21.226616+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 163840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:22.226791+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 163840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:23.226973+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 163840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:24.227155+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:25.227514+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:26.227846+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:27.228028+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:28.228204+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:29.228509+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:30.228858+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:31.229106+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:32.229318+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:33.229460+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:34.229599+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:35.229776+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 147456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:36.229955+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 147456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:37.230078+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 147456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:38.230257+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 131072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:39.230397+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 131072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:40.230528+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 131072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:41.230663+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 131072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:42.230826+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 131072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:43.230955+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 122880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:44.231081+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 122880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:45.231276+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 122880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:46.231514+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 122880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:47.231646+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 122880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:48.231797+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 122880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:49.231972+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 122880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:50.232093+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 122880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:51.232231+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 114688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:52.232369+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 114688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:53.232455+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 114688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:54.232573+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 106496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:55.232692+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 106496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:56.232848+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 106496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:57.232961+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 106496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:58.233106+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 90112 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:59.233244+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 90112 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:00.233429+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 90112 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:01.233532+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 90112 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:02.233643+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 90112 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:03.233781+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 90112 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:04.233914+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:05.234635+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:06.234788+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:07.235002+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:08.235175+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:09.235351+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:10.235488+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:11.235622+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:12.235953+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:13.236317+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:14.236511+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:15.236647+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:16.236783+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:17.236901+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:18.237027+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:19.237193+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:20.237356+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:21.237601+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:22.237912+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:23.238052+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:24.238174+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:25.238398+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:26.238646+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:27.238829+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:28.239059+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:29.239277+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:30.239422+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:31.239649+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:32.239862+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:33.240154+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:34.240390+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:35.240547+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:36.240777+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:37.240932+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:38.241065+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:39.241181+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:40.241318+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:41.241453+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:42.241582+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:43.241772+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:44.241959+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:45.242080+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:46.242262+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:47.242403+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:48.242642+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 49152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:49.242759+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 49152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:50.242871+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 49152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:51.243320+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 49152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:52.243490+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 49152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:53.243598+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:54.243732+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:55.243864+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:56.244049+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:57.244187+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:58.244318+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:59.244450+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:00.244571+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:01.244703+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:02.244869+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:03.245024+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:04.245187+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:05.245371+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:06.245611+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:07.245819+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:08.246356+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:09.246571+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:10.247440+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:11.247784+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:12.248375+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:13.248813+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:14.249200+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:15.249533+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:16.249690+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:17.249914+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:18.250181+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:19.250449+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:20.250652+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:21.250860+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:22.251018+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:23.251173+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 24576 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:24.251316+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:25.251470+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:26.251696+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:27.251868+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:28.252047+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:29.252178+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:30.252352+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:31.252499+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:32.252712+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:33.252944+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:34.253133+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:35.253370+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:36.253633+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:37.253810+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:38.254078+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:39.254318+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:40.254613+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:41.254776+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:42.254948+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:43.255174+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:44.255342+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:45.255522+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:46.255808+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:47.255989+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:48.256177+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:49.256307+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:50.256471+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:51.256623+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:52.256952+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:53.257061+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:54.257181+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:55.257283+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:56.257471+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 0 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:57.257701+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 0 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:58.257887+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:59.258052+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:00.258233+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:01.258397+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:02.258572+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:03.258794+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:04.258947+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:05.259116+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:06.259383+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:07.259504+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:08.259643+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:09.259786+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:10.259950+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:11.260269+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:12.260398+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:13.260568+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:14.260874+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:15.261043+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:16.261413+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:17.262216+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:18.262498+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:19.262833+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:20.263040+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:21.263859+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:22.264169+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:23.264354+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:24.264499+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:25.264659+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:26.264869+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:27.265372+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:28.265502+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:29.265932+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:30.266307+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:31.266443+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:32.266639+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:33.266788+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:34.266905+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:35.267039+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:36.267291+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:37.267452+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:38.267575+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:39.267751+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:40.267977+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:41.268196+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:42.268415+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:43.268540+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1007616 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:44.268833+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:45.269060+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:46.269353+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:47.269523+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:48.269660+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:49.269846+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:50.270010+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:51.270156+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5728 writes, 24K keys, 5728 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5728 writes, 934 syncs, 6.13 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f216be98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:52.270328+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:53.270476+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:54.270588+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:55.270712+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:56.270885+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:57.271031+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:58.271148+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:59.271255+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:00.271415+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:01.271573+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:02.271682+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:03.271801+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:04.271912+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:05.272131+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:06.272304+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:07.272490+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:08.272603+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:09.272771+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:10.272989+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:11.273131+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:12.273270+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:13.273396+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:14.273528+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:15.273648+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:16.273805+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:17.273956+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:18.274077+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:19.279130+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:20.283403+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:21.286329+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:22.286705+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:23.287249+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:24.287866+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:25.289531+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:26.289830+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:27.291354+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:28.291688+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:29.291882+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:30.292661+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:31.293027+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:32.293304+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:33.293557+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:34.293765+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:35.293919+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:36.294198+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:37.294340+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:38.294526+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:39.294838+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:40.294994+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:41.295230+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:42.295389+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:43.295612+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:44.295794+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:45.296031+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.950469971s of 299.984924316s, submitted: 24
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:46.296276+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:47.296419+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:48.296531+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 729088 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:49.296699+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:50.296972+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:51.297149+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:52.297331+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:53.297522+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:54.297697+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:55.297966+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:56.298205+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:57.298363+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:58.298486+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:59.298624+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:00.298782+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:01.298890+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:02.299015+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:03.299141+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:04.299303+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:05.299462+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:06.299596+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:07.299813+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:08.299961+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:09.300126+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:10.300353+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:11.300492+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:12.300835+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:13.587092+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:14.587193+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:15.587350+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:16.587572+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:17.587849+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:18.588045+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:19.588226+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:20.588385+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:21.588542+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:22.588699+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:23.588906+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:24.589053+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:25.589218+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:26.589446+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:27.589573+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:28.589764+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:29.589903+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:30.590046+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:31.590177+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:32.590523+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:33.590792+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:34.591021+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:35.591193+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:36.591414+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:37.591591+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:38.591900+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:39.592043+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:40.592188+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:41.592405+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:42.592583+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:43.592763+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:44.592907+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:45.593040+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:46.593192+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:47.593339+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:48.593644+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:49.593858+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:50.594013+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:51.594195+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:52.594392+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 704512 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:53.594566+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:54.594774+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:55.594930+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:56.595109+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:57.595251+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:58.595328+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:59.595535+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:00.595715+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:01.595959+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:02.596190+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:03.596391+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:04.596581+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:05.596863+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:06.597150+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:07.597323+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:08.597497+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:09.597824+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:10.598030+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:11.598224+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:12.598402+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:13.598605+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:14.598837+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:15.598987+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:16.599147+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:17.599333+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:18.599502+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:19.599757+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:20.599933+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:21.600099+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:22.600268+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:23.600387+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:24.600603+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:25.600805+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:26.601095+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:27.601309+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:28.601467+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:29.601866+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:30.602163+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:31.602390+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:32.602690+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:33.603061+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:34.603411+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:35.603690+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:36.604023+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:37.604295+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:38.604467+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:39.605161+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:40.605404+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:41.605592+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:42.605876+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:43.606006+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:44.606210+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:45.606351+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:46.606576+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:47.606788+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:48.606942+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:49.607131+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:50.607324+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:51.607478+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:52.607765+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:53.607896+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:54.608020+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:55.608212+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:56.608417+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:57.608590+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:58.608805+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:59.609024+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:00.609177+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:01.609427+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:02.609664+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:03.609776+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:04.610014+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:05.610172+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:06.610390+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:07.610589+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:08.610811+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:09.610967+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:10.611121+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:11.611277+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:12.611511+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:13.611777+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:14.611943+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:15.612158+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:16.612401+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:17.612587+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:18.612786+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:19.612930+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:20.613126+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:21.613279+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:22.613405+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:23.613538+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:24.613643+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:25.613826+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:26.614086+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:27.614258+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:28.614444+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:29.614696+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:30.614888+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:31.615015+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:32.615150+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:33.615357+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:34.615492+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:35.615620+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 679936 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:36.615807+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:37.615944+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:38.616083+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:39.616215+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:40.616288+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:41.616449+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:42.616604+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:43.616781+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:44.616980+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:45.617113+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:46.617242+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:47.617393+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:48.617561+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:49.617689+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:50.617902+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:51.618023+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:52.618170+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:53.618300+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:54.618426+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:55.618545+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:56.618707+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:57.618892+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:58.619055+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:59.619204+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:00.619329+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:01.619471+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:02.619601+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:03.619771+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:04.619925+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:05.620150+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:06.620344+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:07.620518+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:08.620678+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:09.620848+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:10.621013+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:11.621144+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:12.621368+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:13.621531+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:14.621707+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:15.621920+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:16.622088+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:17.622254+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:18.622418+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:19.622633+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:20.622817+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:21.622966+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 671744 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:22.623126+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:23.623287+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:24.623488+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:25.623651+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:26.623824+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:27.624067+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:28.624242+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:29.624490+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:30.624844+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:31.625023+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:32.625165+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:33.625338+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:34.625485+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:35.625653+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:36.626925+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:37.627096+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:38.627245+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:39.627436+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:40.627632+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:41.627780+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:42.627965+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:43.628082+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:44.628195+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:45.628366+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:46.628538+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:47.628685+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:48.628837+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:49.629007+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:50.629160+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:51.629318+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:52.629485+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:53.629656+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 663552 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:54.629864+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:55.630003+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:56.630185+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:57.630349+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:58.630547+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:59.630777+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:00.630900+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:01.631024+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:02.631191+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:03.631361+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:04.631467+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:05.631590+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:06.631839+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:07.632015+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:08.632168+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:09.632390+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:10.632575+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:11.632750+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:12.633009+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:13.633136+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:14.633325+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:15.633506+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:16.633772+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:17.633963+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:18.634095+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:19.634281+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:20.634420+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:21.634571+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:22.634758+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:23.634917+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:24.635045+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:25.635169+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:26.635408+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:27.635586+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:28.635863+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:29.636041+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:30.636218+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:31.636449+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:32.636764+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955242 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:33.636871+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:34.636999+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:35.637199+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f21a75d800
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:36.637478+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 290.004302979s of 290.769348145s, submitted: 90
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 507904 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fcea2000/0x0/0x4ffc00000, data 0xc27c1/0x18a000, compress 0x0/0x0/0x0, omap 0x112ad, meta 0x2bbed53), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:37.637670+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 475136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994263 data_alloc: 218103808 data_used: 9876
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:38.637834+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 133 ms_handle_reset con 0x55f21a75d800 session 0x55f21ab96fc0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18309120 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:39.638064+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f21a144400
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fc694000/0x0/0x4ffc00000, data 0x8c7b56/0x996000, compress 0x0/0x0/0x0, omap 0x11bc8, meta 0x2bbe438), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18292736 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:40.638235+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 134 ms_handle_reset con 0x55f21a144400 session 0x55f21acaa8c0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 18071552 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:41.638411+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 18071552 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:42.638594+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 18071552 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057696 data_alloc: 218103808 data_used: 10489
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe8f000/0x0/0x4ffc00000, data 0x10c9713/0x1199000, compress 0x0/0x0/0x0, omap 0x11e53, meta 0x2bbe1ad), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:43.638827+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe8f000/0x0/0x4ffc00000, data 0x10c9713/0x1199000, compress 0x0/0x0/0x0, omap 0x11e53, meta 0x2bbe1ad), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 18055168 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:44.639009+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 18055168 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:45.639179+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 18046976 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:46.639347+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 18046976 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:47.639534+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 18046976 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059414 data_alloc: 218103808 data_used: 10489
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:48.639694+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe8e000/0x0/0x4ffc00000, data 0x10cb192/0x119c000, compress 0x0/0x0/0x0, omap 0x1212b, meta 0x2bbded5), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 18046976 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f21a145400
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.136947632s of 12.308998108s, submitted: 57
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:49.639872+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 136 ms_handle_reset con 0x55f21a145400 session 0x55f21ab97dc0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 17858560 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:50.640097+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 17858560 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:51.640338+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 17858560 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f217edf400
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:52.640522+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 137 ms_handle_reset con 0x55f217edf400 session 0x55f21abb16c0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 17612800 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982925 data_alloc: 218103808 data_used: 14550
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:53.640658+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 17596416 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8a000/0x0/0x4ffc00000, data 0xce91c/0x19f000, compress 0x0/0x0/0x0, omap 0x1267d, meta 0x2bbd983), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:54.640852+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 17596416 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:55.641031+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 17596416 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:56.641293+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce88000/0x0/0x4ffc00000, data 0xd03b7/0x1a2000, compress 0x0/0x0/0x0, omap 0x1298c, meta 0x2bbd674), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 17563648 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:57.641443+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 17563648 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985747 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:58.641646+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 17555456 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:59.641873+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 17539072 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:00.642029+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 17539072 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:01.642142+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 17539072 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce88000/0x0/0x4ffc00000, data 0xd03b7/0x1a2000, compress 0x0/0x0/0x0, omap 0x1298c, meta 0x2bbd674), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:02.642349+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 17539072 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985747 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:03.642524+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 17539072 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:04.642675+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f217edf800
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.627066612s of 15.738327026s, submitted: 74
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 17367040 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:05.642828+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 139 ms_handle_reset con 0x55f217edf800 session 0x55f21ac09500
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 17358848 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:06.642999+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 17358848 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:07.643162+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fce83000/0x0/0x4ffc00000, data 0xd1fbe/0x1a7000, compress 0x0/0x0/0x0, omap 0x12df3, meta 0x2bbd20d), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 17358848 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991855 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fce83000/0x0/0x4ffc00000, data 0xd1fbe/0x1a7000, compress 0x0/0x0/0x0, omap 0x12df3, meta 0x2bbd20d), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:08.643279+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 17358848 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:09.643445+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 17358848 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:10.643601+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fce83000/0x0/0x4ffc00000, data 0xd1fbe/0x1a7000, compress 0x0/0x0/0x0, omap 0x12df3, meta 0x2bbd20d), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 17358848 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:11.643820+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 17358848 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:12.643963+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 17358848 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991855 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:13.644086+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 17358848 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:14.644307+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 17358848 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:15.644476+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f2190bc000
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.038352966s of 11.089797974s, submitted: 17
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fce83000/0x0/0x4ffc00000, data 0xd1fbe/0x1a7000, compress 0x0/0x0/0x0, omap 0x12df3, meta 0x2bbd20d), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 17219584 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:16.644674+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 17211392 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 140 ms_handle_reset con 0x55f2190bc000 session 0x55f219330700
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:17.644804+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991556 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:18.644924+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:19.645107+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:20.645236+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd3b7b/0x1a8000, compress 0x0/0x0/0x0, omap 0x130ee, meta 0x2bbcf12), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:21.645376+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:22.645555+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991556 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:23.645703+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd3b7b/0x1a8000, compress 0x0/0x0/0x0, omap 0x130ee, meta 0x2bbcf12), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:24.645892+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:25.646037+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:26.646265+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:27.646442+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:28.646583+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:29.646795+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:30.646975+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:31.647155+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:32.647326+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:33.647502+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:34.647632+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:35.647797+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:36.647971+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:37.648133+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:38.648396+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:39.648523+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:40.648691+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:41.648863+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:42.649064+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:43.649208+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:44.649371+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:45.649553+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:46.649793+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:47.649935+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:48.650070+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:49.650199+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:50.650354+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:51.650488+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:52.650683+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:53.650811+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:54.650935+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:55.651056+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:56.651205+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:57.651437+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:58.651588+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:59.651816+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:00.652026+0000)
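[Editor's note: journald stamps this entire burst 16:44:26, yet the expiry timestamps embedded in the rotating-secret checks advance by almost exactly one second per tick, which is consistent with buffered daemon output being flushed in one go rather than a clock problem. A throwaway sketch over two consecutive rotating-check payloads copied from just above:]

    import re
    from datetime import datetime

    samples = [
        "monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:59.651816+0000)",
        "monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:00.652026+0000)",
    ]

    stamps = [datetime.strptime(re.search(r"after (\S+)\)", s).group(1),
                                "%Y-%m-%dT%H:%M:%S.%f%z") for s in samples]
    print(stamps[1] - stamps[0])   # -> 0:00:01.000210, i.e. one tick per second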
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:01.652186+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:02.652345+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:03.652566+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:04.652706+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:05.652888+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:06.653063+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:07.653233+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:08.653374+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:09.653542+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:10.653810+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:11.653976+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:12.654119+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:13.654239+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:14.654375+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:15.654548+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:16.654739+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 16162816 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:17.654906+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:18.655019+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:19.655158+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:20.655354+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:21.655518+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:22.655691+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:23.655894+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:24.656085+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:25.656178+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:26.656420+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:27.656610+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:28.656768+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:29.656931+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:30.657072+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:31.657219+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:32.657362+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:33.657503+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:34.657620+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:35.657786+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:36.658030+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:37.658177+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:38.658298+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:39.658430+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:40.658565+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:41.658712+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:42.658943+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:43.659150+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:44.659295+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:45.659417+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:46.659577+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:47.659676+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:48.659810+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:49.659934+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:50.660413+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:51.660743+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:52.660890+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:53.661264+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:54.662268+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:55.662924+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:56.663515+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:57.663695+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:58.664086+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:59.664797+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:00.665254+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:01.665683+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:02.666079+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:03.666371+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:04.666642+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:05.666802+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:06.667024+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:07.667152+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 16154624 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:08.667412+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 16146432 heap: 93241344 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd55fa/0x1ab000, compress 0x0/0x0/0x0, omap 0x133fd, meta 0x2bbcc03), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995050 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f217edf400
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 112.787109375s of 112.877883911s, submitted: 31
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:09.667641+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 24444928 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 141 handle_osd_map epochs [141,142], i have 142, src has [1,142]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 142 ms_handle_reset con 0x55f217edf400 session 0x55f21a4a1340
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f217edf800
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:10.667851+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:11.668080+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 23339008 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 143 ms_handle_reset con 0x55f217edf800 session 0x55f21ab67dc0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:12.668287+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 23298048 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb207000/0x0/0x4ffc00000, data 0x1d48d51/0x1e23000, compress 0x0/0x0/0x0, omap 0x14158, meta 0x2bbbea8), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:13.668492+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 23298048 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155334 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb207000/0x0/0x4ffc00000, data 0x1d48d51/0x1e23000, compress 0x0/0x0/0x0, omap 0x14158, meta 0x2bbbea8), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:14.668675+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 23298048 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb207000/0x0/0x4ffc00000, data 0x1d48d51/0x1e23000, compress 0x0/0x0/0x0, omap 0x14158, meta 0x2bbbea8), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:15.668865+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 23298048 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:16.669061+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 23298048 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:17.669210+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 23298048 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:18.669399+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 23298048 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155334 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:19.669582+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 23298048 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb207000/0x0/0x4ffc00000, data 0x1d48d51/0x1e23000, compress 0x0/0x0/0x0, omap 0x14158, meta 0x2bbbea8), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f21a145400
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.517296791s of 11.683993340s, submitted: 35
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:20.669744+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 23175168 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 144 ms_handle_reset con 0x55f21a145400 session 0x55f218813c00
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:21.669881+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 22085632 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:22.670473+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 22085632 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fc205000/0x0/0x4ffc00000, data 0xd4a932/0xe25000, compress 0x0/0x0/0x0, omap 0x1480a, meta 0x2bbb7f6), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:23.670613+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 22085632 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f2190bc800
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075998 data_alloc: 218103808 data_used: 19240
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 145 ms_handle_reset con 0x55f2190bc800 session 0x55f21abb0380
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:24.670776+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21045248 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:25.670910+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21045248 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:26.671096+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21045248 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xdc512/0x1b7000, compress 0x0/0x0/0x0, omap 0x14f24, meta 0x2bbb0dc), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:27.671284+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21045248 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:28.671431+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21045248 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1015179 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:29.671590+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21045248 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:30.671829+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21045248 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.047216415s of 10.180684090s, submitted: 82
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xdc512/0x1b7000, compress 0x0/0x0/0x0, omap 0x14f24, meta 0x2bbb0dc), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:31.672022+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21045248 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xddfad/0x1ba000, compress 0x0/0x0/0x0, omap 0x15218, meta 0x2bbade8), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:32.672223+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21045248 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:33.672420+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xddfad/0x1ba000, compress 0x0/0x0/0x0, omap 0x15218, meta 0x2bbade8), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 21037056 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017953 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:34.672572+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 21037056 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f21b102800
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:35.672785+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 21004288 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 147 ms_handle_reset con 0x55f21b102800 session 0x55f21a62ddc0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:36.672954+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 19972096 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:37.673226+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fce6b000/0x0/0x4ffc00000, data 0xdfbc7/0x1bf000, compress 0x0/0x0/0x0, omap 0x155b8, meta 0x2bbaa48), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 19972096 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:38.673573+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 19972096 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024188 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:39.673798+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 19972096 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:40.673994+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 19972096 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:41.674159+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 19972096 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fce6b000/0x0/0x4ffc00000, data 0xdfbc7/0x1bf000, compress 0x0/0x0/0x0, omap 0x155b8, meta 0x2bbaa48), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:42.674357+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 19972096 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fce6b000/0x0/0x4ffc00000, data 0xdfbc7/0x1bf000, compress 0x0/0x0/0x0, omap 0x155b8, meta 0x2bbaa48), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:43.674536+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 19972096 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024188 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:44.674756+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 19972096 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:45.674922+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 19972096 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f217edf400
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.889292717s of 15.119859695s, submitted: 35
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 147 handle_osd_map epochs [147,148], i have 148, src has [1,148]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 148 ms_handle_reset con 0x55f217edf400 session 0x55f21a631a40
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe1794/0x1c1000, compress 0x0/0x0/0x0, omap 0x158b5, meta 0x2bba74b), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:46.675116+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 19955712 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:47.675393+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 19955712 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:48.675546+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 19955712 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025656 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:49.675821+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 19955712 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:50.676024+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 19955712 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:51.676227+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 19955712 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe1771/0x1c0000, compress 0x0/0x0/0x0, omap 0x158b5, meta 0x2bba74b), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:52.676415+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 19955712 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:53.676623+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 19955712 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025656 data_alloc: 218103808 data_used: 19224
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:54.676801+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 19955712 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:55.676946+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 148 handle_osd_map epochs [148,149], i have 149, src has [1,149]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe1771/0x1c0000, compress 0x0/0x0/0x0, omap 0x158b5, meta 0x2bba74b), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:56.677119+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:57.677261+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:58.677483+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:59.677802+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:00.678029+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:01.678212+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:02.678445+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:03.678773+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:04.678961+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:05.679142+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:06.679452+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:07.679670+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:08.679919+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:09.680131+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:10.680389+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:11.680547+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:12.680786+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:13.680941+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:14.681122+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:15.681421+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:16.681638+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:17.681806+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 20062208 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:18.681953+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:19.682151+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:20.682405+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:21.682654+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:22.682783+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:23.683006+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:24.683249+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:25.683399+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:26.683552+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:27.683702+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:28.683890+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:29.684012+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:30.684145+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:31.684317+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:32.684424+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:33.684664+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:34.684773+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:35.684872+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:36.685090+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:37.685228+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 20045824 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:38.685402+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:39.685538+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:40.685646+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:41.685777+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:42.685913+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:43.686163+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:44.686322+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:45.686440+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:46.686610+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:47.686786+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:48.686962+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:49.687124+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:50.687301+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:51.687470+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 6567 writes, 26K keys, 6567 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6567 writes, 1311 syncs, 5.01 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 839 writes, 2350 keys, 839 commit groups, 1.0 writes per commit group, ingest: 1.16 MB, 0.00 MB/s
                                           Interval WAL: 839 writes, 377 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
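This is RocksDB's periodic statistics dump: Uptime shows the DB has been open about 1800 s and dumps every 600 s, with counters reported both cumulatively and for the last interval. The writes-per-sync figures follow directly from the raw counters, which makes the block easy to sanity-check:

    # WAL counters copied from the stats dump above: (writes, syncs).
    cumulative = (6567, 1311)
    interval = (839, 377)

    for label, (writes, syncs) in (("cumulative", cumulative),
                                   ("interval", interval)):
        print(f"{label}: {writes} writes / {syncs} syncs "
              f"= {writes / syncs:.2f} writes per sync")
    # Prints 5.01 and 2.23, matching the dump.
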
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:52.687638+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:53.687848+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:54.688013+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:55.688146+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:56.688345+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 20029440 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:57.688551+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: mgrc ms_handle_reset ms_handle_reset con 0x55f218870000
Dec 09 16:44:26 compute-0 ceph-osd[88099]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/740356566
Dec 09 16:44:26 compute-0 ceph-osd[88099]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/740356566,v1:192.168.122.100:6801/740356566]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: get_auth_request con 0x55f21a145000 auth_method 0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: mgrc handle_mgr_configure stats_period=5
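The reset/reconnect sequence above records a routine manager-session bounce: the OSD's mgr client saw its connection reset, terminated the old session with 192.168.122.100:6800, dialed the mgr's v2/v1 address pair, re-ran authentication, and was handed its reporting configuration (stats_period=5, i.e. report stats to the mgr every 5 s). One reset/reconnect pair is normal, for example across a mgr restart or failover; repeated cycles would be worth counting. A rough counter over a journalctl pipe, with an illustrative per-minute bucketing and threshold:

    import re
    import sys
    from collections import Counter

    # Bucket "Starting new session" events by the minute in the syslog prefix.
    PAT = re.compile(r"^(\w+ +\d+ \d+:\d+):\d+ .* "
                     r"mgrc reconnect Starting new session")

    counts = Counter(m.group(1) for line in sys.stdin
                     if (m := PAT.search(line)))
    for minute, n in sorted(counts.items()):
        print(minute, n)   # more than ~1/minute suggests a flapping mgr session
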
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:58.688812+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:59.688962+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:00.689096+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:01.689232+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:02.689413+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:03.689588+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:04.689800+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:05.689949+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:06.690148+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:07.690304+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:08.690468+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:09.690597+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:10.690733+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:11.690894+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:12.691064+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:13.691235+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:14.691394+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:15.691531+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:16.691698+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:17.691838+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:18.692004+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:19.692152+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:20.692278+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:21.692473+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:22.692663+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:23.692831+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:24.692998+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:25.693188+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:26.693399+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:27.693535+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:28.693701+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:29.693973+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:30.694100+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:31.694283+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:32.694896+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:33.695038+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:34.695178+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:35.695298+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:36.695446+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:37.695568+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:38.695855+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:39.696174+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:40.696395+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:41.696545+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:42.696838+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:43.697069+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028350 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:44.697311+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 19791872 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:45.697538+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 120.252685547s of 120.305191040s, submitted: 41
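The _kv_sync_thread utilization line is BlueStore reporting how busy its key-value commit thread was over the last window: idle 120.25 s of 120.31 s while submitting 41 transactions, i.e. effectively no write load, consistent with the near-zero interval rates in the DB Stats dump earlier. The derived numbers:

    # Figures from the utilization line above.
    idle, window, submitted = 120.252685547, 120.305191040, 41

    busy = window - idle
    print(f"{idle / window:.3%} idle; {busy * 1e3:.0f} ms busy across "
          f"{submitted} submits (~{busy / submitted * 1e3:.1f} ms each on average)")
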
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 19775488 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
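The twelve identical <cls> lines are debug prints from the fifo object class (cls_fifo.cc) executing its get_meta handler inside the OSD, one per read of a FIFO's metadata object; RADOS FIFOs back queues and logs for services such as RGW, so a burst like this typically just reflects a batch of metadata reads landing on this OSD. The message carries only the function signature, so when scanning long journals it can help to collapse such runs; a small sketch, assuming plain journalctl lines on stdin:

    import sys
    from itertools import groupby

    def message(line):
        # Drop the "Mon DD HH:MM:SS host proc[pid]: " prefix, keep the payload.
        return line.split("]: ", 1)[-1]

    # Collapse consecutive identical payloads into "payload  xN".
    for text, run in groupby(sys.stdin, key=message):
        n = sum(1 for _ in run)
        suffix = f"  x{n}" if n > 1 else ""
        print(text.rstrip() + suffix)
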
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:46.697807+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:47.698022+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:48.698209+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:49.698377+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:50.698616+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:51.698798+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:52.698989+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:53.699147+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:54.699379+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:55.699586+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:56.699770+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:57.699935+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:58.700187+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:59.700395+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:00.700608+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:01.700822+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:02.701041+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:03.701223+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:04.701395+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:05.701574+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:06.701780+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:07.701942+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:08.702114+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:09.702261+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:10.702419+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:11.702625+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:12.702825+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:13.702981+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:14.703228+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:15.703443+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:16.703636+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:17.703905+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:18.704110+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:19.704284+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:20.704474+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:21.704662+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:22.704871+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:23.705094+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:24.705248+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:25.705463+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:26.705661+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:27.705864+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:28.706060+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:29.706224+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:30.706403+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:31.706580+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:32.706798+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:33.707034+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:34.707228+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:35.707429+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:36.707628+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:37.707801+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:38.707928+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:39.708061+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:40.708187+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:41.708319+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:42.708478+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:43.708634+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:44.708812+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:45.709035+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:46.709341+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:47.709549+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:48.709753+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:49.710006+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:50.710188+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:51.710432+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:52.710570+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:53.710867+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:54.711050+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:55.711278+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:56.711514+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:57.711656+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:58.711799+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:59.712008+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:00.712141+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:01.712327+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:02.712567+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:03.712826+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:04.713006+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:05.713142+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:06.713366+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:07.713549+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:08.713796+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:09.713916+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:10.714039+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:11.714176+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:12.714345+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:14.207547+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:15.207681+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:16.207832+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:17.207977+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:18.208101+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:19.208238+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:20.208389+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:21.208519+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:22.208683+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:23.208868+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:24.209000+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:25.209111+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:26.209229+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:27.209406+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:28.209545+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:29.209669+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:30.209781+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:31.209893+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:32.210040+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:33.210209+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:34.210347+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:35.210516+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:36.210653+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:37.210810+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:38.210935+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:39.211053+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:40.211217+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:41.211365+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:42.211569+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:43.211752+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:44.212042+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 19783680 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:45.212204+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 19775488 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:46.212902+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 19775488 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:47.213109+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 19775488 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:48.213239+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 19775488 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:49.213366+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 19775488 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:50.213557+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 19775488 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:51.213779+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 19775488 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:52.213915+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 19775488 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:53.214050+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 19775488 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:54.214229+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 19775488 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:55.214346+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 19775488 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:56.214506+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 09 16:44:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730083786' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19767296 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:57.215119+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19767296 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:58.215499+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19767296 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:59.215622+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19767296 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:00.215798+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19767296 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:01.215947+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19767296 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:02.216115+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19767296 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:03.216278+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19767296 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:04.216451+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19767296 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:05.216632+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19767296 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:06.216771+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:07.216952+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:08.217131+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:09.217293+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:10.217500+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:11.217796+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:12.217972+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:13.218123+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:14.218260+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:15.218450+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:16.218617+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:17.218803+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:18.218936+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:19.219082+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:20.219216+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:21.219419+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:22.219620+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:23.219797+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:24.219944+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:25.220088+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:26.220197+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:27.220445+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:28.220606+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:29.220770+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:30.220938+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:31.221146+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:32.221287+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:33.221413+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:34.221548+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:35.221685+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:36.221822+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:37.222008+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:38.222166+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 19759104 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:39.222309+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:40.222456+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:41.222645+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:42.222853+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:43.223116+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:44.223304+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:45.223451+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:46.223609+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:47.223809+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:48.223948+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:49.224102+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:50.224414+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:51.224563+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:52.224760+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:53.224910+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:54.225057+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:55.225189+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:56.225385+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:57.225600+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:58.225802+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:59.225942+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:00.226155+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:01.226332+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:02.226495+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:03.226702+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:04.226952+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:05.227184+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:06.227334+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:07.227565+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:08.227715+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:09.227862+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:10.228028+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:11.228201+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:12.228399+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:13.228539+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:14.228701+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:15.228907+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:16.229104+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:17.229346+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:18.229577+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:19.229747+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:20.229932+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:21.230082+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:22.230272+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 19750912 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:23.230447+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:24.230666+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:25.230878+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:26.231107+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:27.231365+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:28.231640+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:29.231814+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:30.232031+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:31.232201+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:32.232385+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:33.233595+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:34.233758+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:35.233942+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:36.234101+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:37.234310+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:38.234458+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:39.234639+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:40.234779+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19742720 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:41.234912+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19734528 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:42.235073+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19734528 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:43.235246+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19734528 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:44.235412+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19734528 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:45.235553+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19734528 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:46.235708+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19734528 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:47.235942+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:48.236154+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19734528 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:49.236440+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19734528 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:50.236669+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19734528 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:51.236896+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19734528 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:52.237152+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19734528 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:53.237332+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19734528 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:54.237497+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:55.237686+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:56.237923+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:57.238131+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:58.238288+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:59.238433+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:00.238600+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:01.238734+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:02.238901+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:03.239127+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:04.239369+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:05.239556+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:06.239805+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:07.240048+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:08.240251+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
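This is the only line in the stretch where the monclient actually sends something, and it names its peer in msgr2 form: v2:192.168.122.100:3300/0, i.e. protocol v2, monitor IP, port 3300 (the default messenger-v2 monitor port; legacy v1 uses 6789), and a nonce of 0. A tiny parser for the address format:

    import re

    addr = "v2:192.168.122.100:3300/0"
    # entity address: <proto>:<ip>:<port>/<nonce>
    proto, ip, port, nonce = re.match(
        r"(v[12]):([\d.]+):(\d+)/(\d+)", addr).groups()
    print(proto, ip, int(port), int(nonce))   # v2 192.168.122.100 3300 0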
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:09.240470+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:10.240648+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:11.240840+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:12.240995+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:13.241184+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:14.241321+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:15.241587+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:16.241801+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:17.242026+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:18.242349+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:19.242559+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:20.242873+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:21.243043+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:22.243199+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:23.243419+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:24.243582+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:25.243797+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:26.243940+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:27.244188+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:28.244424+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:29.244561+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:30.244739+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 19726336 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:31.244919+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:32.245124+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:33.245298+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:34.245463+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:35.245624+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:36.245783+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:37.245952+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:38.246081+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:39.246300+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:40.246549+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:41.246698+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:42.246889+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:43.247312+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:44.247524+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:45.247830+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:46.248051+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:47.248259+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:48.248461+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:49.248612+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:50.248834+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:51.248974+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:52.249174+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:53.249381+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:54.249581+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:55.249807+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:56.250011+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19718144 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:57.250242+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 19709952 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:58.250461+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 19701760 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:59.250695+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 19701760 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:00.250929+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 19701760 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:01.251094+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 19701760 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:02.251271+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 19701760 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:03.251404+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 19701760 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:04.251574+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 19701760 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:05.251786+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 19693568 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:06.251935+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 19693568 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:07.252141+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 19693568 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:08.252285+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 19693568 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:09.252547+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 19693568 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:10.252793+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 19693568 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:11.253006+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 19693568 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:12.253191+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 19693568 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:13.253349+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 19693568 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:14.253586+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:15.253811+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:16.254004+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:17.254229+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:18.254500+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:19.254770+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:20.254994+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:21.255126+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:22.255294+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:23.255507+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:24.255701+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:25.255978+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:26.256134+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 19685376 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:27.256452+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:28.256659+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:29.256832+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:30.257037+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:31.257182+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:32.257392+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:33.257594+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:34.257793+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:35.257994+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:36.258189+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:37.258404+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:38.258626+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:39.258831+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:40.258943+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:41.259063+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:42.259204+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:43.259346+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:44.259608+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:45.259777+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:46.259912+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:47.260114+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:48.260256+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:49.260421+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:50.260590+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:51.260842+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:52.261052+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:53.261275+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 19677184 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:54.261407+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:55.261533+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027630 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:56.261634+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:57.261804+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:58.261982+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:59.262152+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xe31f0/0x1c3000, compress 0x0/0x0/0x0, omap 0x15bb1, meta 0x2bba44f), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:00.262271+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f219389800
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 374.194122314s of 374.369293213s, submitted: 114
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029368 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:01.262392+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 ms_handle_reset con 0x55f219389800 session 0x55f21b98f880
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:02.262549+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:03.262676+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:04.262823+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:05.262992+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:06.263171+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:07.263339+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:08.263528+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:09.263691+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:10.263791+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:11.263941+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:12.264101+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:13.264250+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:14.264433+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:15.264610+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:16.264803+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:17.265061+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:18.265236+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:19.265408+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:20.265582+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:21.265986+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:22.266147+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:23.266276+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:24.266470+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:25.266652+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:26.266864+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:27.267095+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:28.267214+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:29.267445+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:30.267583+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:31.267748+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:32.267900+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:33.268032+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:34.268183+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:35.268296+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:36.268467+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 19668992 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:37.268664+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19660800 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:38.268802+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19660800 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:39.268933+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19660800 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:40.269153+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19660800 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:41.269323+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19660800 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:42.269535+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19660800 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:43.269753+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19660800 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:44.269907+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19660800 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:45.270024+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19660800 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:46.270183+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19660800 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:47.270347+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19660800 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:48.270523+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19660800 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:49.270688+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:50.270858+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:51.270972+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:52.271102+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:53.271256+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:54.271398+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:55.271536+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:56.271707+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:57.271911+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:58.272039+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:59.272194+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:00.272356+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:01.272506+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:02.272636+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:03.272781+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:04.272975+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:05.273150+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:06.273338+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:07.273619+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:08.273923+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:09.274212+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:10.274416+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:11.274618+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:12.274856+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:13.275003+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:14.275197+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:15.275333+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:16.275512+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:17.275666+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:18.275770+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4dbf/0x1c8000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:19.275908+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:20.276055+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19652608 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034458 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:21.276189+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: handle_auth_request added challenge on 0x55f21b10c800
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 80.807968140s of 80.817008972s, submitted: 4
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0xe4d9c/0x1c7000, compress 0x0/0x0/0x0, omap 0x15ee8, meta 0x2bba118), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 19644416 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 151 ms_handle_reset con 0x55f21b10c800 session 0x55f21b965c00
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:22.276429+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:23.276620+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:24.276815+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:25.276982+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe697c/0x1c9000, compress 0x0/0x0/0x0, omap 0x161c8, meta 0x2bb9e38), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034879 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:26.277172+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:27.277333+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:28.277464+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:29.277657+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:30.277792+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe697c/0x1c9000, compress 0x0/0x0/0x0, omap 0x161c8, meta 0x2bb9e38), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 151 handle_osd_map epochs [151,152], i have 152, src has [1,152]
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe697c/0x1c9000, compress 0x0/0x0/0x0, omap 0x161c8, meta 0x2bb9e38), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:31.277921+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:32.278059+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:33.278229+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:34.278387+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:35.278567+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:36.278737+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:37.278931+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:38.279082+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _renew_subs
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:39.279238+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:40.279379+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:41.279505+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:42.279692+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:43.279873+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:44.280121+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:45.280307+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:46.280503+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:47.280707+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:48.280886+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:49.281127+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:50.281310+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:51.281474+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:52.281666+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:53.281851+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:54.281992+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:55.282170+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:56.282385+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 19636224 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:57.282559+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 19628032 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:58.282764+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 19628032 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:59.282893+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:00.283065+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:01.283194+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:02.283318+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:03.283478+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:04.284064+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:05.284372+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:06.284855+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:07.285215+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:08.285481+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [3])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:09.285826+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:10.286180+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:11.286465+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:12.286828+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:13.287009+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:14.287238+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:15.287460+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:16.287710+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:17.287996+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:18.288209+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:19.288353+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:20.288513+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:21.288672+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:22.288783+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:23.288960+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:24.289133+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:25.289334+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:26.289599+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:27.289838+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:28.290003+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:29.290145+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:30.290332+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:31.290557+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:32.290848+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:33.291050+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:34.291234+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:35.291572+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:36.291934+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:37.292370+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:38.292505+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:39.292675+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:40.292984+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:41.293150+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:42.293318+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:43.293463+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:44.293606+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:45.293754+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:46.293889+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:47.294151+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:48.294347+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:49.294540+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:50.294758+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:51.294856+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:26 compute-0 ceph-osd[88099]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037653 data_alloc: 218103808 data_used: 23285
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:26 compute-0 ceph-osd[88099]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 6853 writes, 27K keys, 6853 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6853 writes, 1449 syncs, 4.73 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 286 writes, 517 keys, 286 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s
                                           Interval WAL: 286 writes, 138 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:52.294943+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe83fb/0x1cc000, compress 0x0/0x0/0x0, omap 0x164ca, meta 0x2bb9b36), peers [0,1] op hist [])
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 19619840 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:53.295070+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: do_command 'config diff' '{prefix=config diff}'
Dec 09 16:44:26 compute-0 ceph-osd[88099]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19496960 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: do_command 'config show' '{prefix=config show}'
Dec 09 16:44:26 compute-0 ceph-osd[88099]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 09 16:44:26 compute-0 ceph-osd[88099]: do_command 'counter dump' '{prefix=counter dump}'
Dec 09 16:44:26 compute-0 ceph-osd[88099]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 09 16:44:26 compute-0 ceph-osd[88099]: do_command 'counter schema' '{prefix=counter schema}'
Dec 09 16:44:26 compute-0 ceph-osd[88099]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
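
The do_command pairs ('config diff', 'config show', 'counter dump', 'counter schema') are admin-socket requests being serviced inside osd.2, the shape of a diagnostics sweep. The same data can be requested by hand; a sketch, assuming the usual `ceph daemon` syntax for admin-socket commands:

    # Issue the same admin-socket commands the log shows osd.2 answering.
    import subprocess

    for cmd in (["config", "diff"], ["config", "show"],
                ["counter", "dump"], ["counter", "schema"]):
        subprocess.run(["ceph", "daemon", "osd.2", *cmd], check=True)
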
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:54.295182+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 19144704 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: tick
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_tickets
Dec 09 16:44:26 compute-0 ceph-osd[88099]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:55.295291+0000)
Dec 09 16:44:26 compute-0 ceph-osd[88099]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 18857984 heap: 101638144 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:26 compute-0 ceph-osd[88099]: do_command 'log dump' '{prefix=log dump}'
Dec 09 16:44:26 compute-0 ceph-mon[75222]: pgmap v1372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:26 compute-0 ceph-mon[75222]: from='client.14648 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:26 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2143821384' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 09 16:44:26 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2730083786' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
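
The two "log last" dispatches show a client draining the most recent 10000 audit- and cluster-channel entries, the request the CLI form below would produce. A sketch, assuming the standard positional arguments of `ceph log last`:

    # Pull recent cluster-log entries the way the dispatches above were made.
    import subprocess

    def ceph_log_last(channel: str, num: int = 10000, level: str = "debug") -> str:
        # Mirrors mon_command {"prefix": "log last", "num": ..., "level": ...,
        # "channel": ...} from the audit lines above.
        out = subprocess.run(["ceph", "log", "last", str(num), level, channel],
                             capture_output=True, text=True, check=True)
        return out.stdout

    # audit_tail = ceph_log_last("audit")
    # cluster_tail = ceph_log_last("cluster")
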
Dec 09 16:44:26 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14658 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} v 0)
Dec 09 16:44:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} : dispatch
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 16:44:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 09 16:44:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3607816633' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: [rbd_support INFO root] load_schedules: images, start_after=
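
The rbd_support burst is the mgr module reloading its mirror-snapshot and trash-purge schedules for the vms, volumes, images, and backups pools. The schedules themselves can be listed from any client; a sketch, with the pool names taken from the log and the rbd subcommands assumed to match this release:

    # List the schedules the rbd_support handlers above just reloaded.
    import subprocess

    for pool in ("vms", "volumes", "images", "backups"):
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--pool", pool, "--recursive"], check=False)
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool, "--recursive"], check=False)
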
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:26 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14662 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:26 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} v 0)
Dec 09 16:44:26 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} : dispatch
Dec 09 16:44:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 09 16:44:27 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366069331' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 09 16:44:27 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14666 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:27 compute-0 ceph-mon[75222]: from='client.14652 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:27 compute-0 ceph-mon[75222]: from='client.14654 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:27 compute-0 ceph-mon[75222]: from='client.14658 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} : dispatch
Dec 09 16:44:27 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3607816633' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 09 16:44:27 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} : dispatch
Dec 09 16:44:27 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2366069331' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 09 16:44:27 compute-0 podman[264741]: 2025-12-09 16:44:27.631483359 +0000 UTC m=+0.066612270 container health_status fbe906a4b60641b0d2bc026295092273d9a8b9783389304fc11b10bffae9c692 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 16:44:27 compute-0 podman[264740]: 2025-12-09 16:44:27.665854033 +0000 UTC m=+0.110580826 container health_status 0a76737b5f2b25872fe2b565002617e1a84450afb10d979302e98619d30d6470 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
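
The two podman events are routine healthcheck runs for ovn_metadata_agent and ovn_controller, both healthy with a zero failing streak; per the config_data, each container mounts its check script at /openstack/healthcheck. The current status can be read back from the host; a sketch assuming the usual podman inspect template path:

    # Read the same health state the podman events above report.
    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True)
        print(name, out.stdout.strip())  # expect: healthy
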
Dec 09 16:44:27 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 09 16:44:27 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1546323072' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 09 16:44:27 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14670 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 09 16:44:28 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2021226606' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 09 16:44:28 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14674 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:28 compute-0 ceph-mon[75222]: pgmap v1373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:28 compute-0 ceph-mon[75222]: from='client.14662 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:28 compute-0 ceph-mon[75222]: from='client.14666 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:28 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1546323072' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 09 16:44:28 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2021226606' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 09 16:44:28 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 09 16:44:28 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/767142446' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 09 16:44:28 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:28 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14678 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:29 compute-0 crontab[264994]: (root) LIST (root)
Dec 09 16:44:29 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 09 16:44:29 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069207093' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 09 16:44:29 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14682 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:29 compute-0 ceph-mon[75222]: from='client.14670 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:29 compute-0 ceph-mon[75222]: from='client.14674 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:29 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/767142446' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 09 16:44:29 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2069207093' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 09 16:44:29 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14685 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:30 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14688 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:30 compute-0 ceph-mgr[75515]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 09 16:44:30 compute-0 ceph-67f67f44-54fc-54ea-8df0-10931b6ecdaf-mgr-compute-0-ysegzv[75511]: 2025-12-09T16:44:30.290+0000 7fc8ad494640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
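
The (95) Operation not supported reply is self-describing: 'healthcheck history ls' is served by the prometheus mgr module, which is not loaded, and the message names the fix. A sketch that applies it only when needed; the enable command is quoted verbatim from the reply above, while the JSON layout of 'mgr module ls' is an assumption about this release:

    # Enable the prometheus mgr module if it is not already enabled.
    import json
    import subprocess

    def ensure_prometheus_module():
        out = subprocess.run(["ceph", "mgr", "module", "ls", "--format", "json"],
                             capture_output=True, text=True, check=True)
        modules = json.loads(out.stdout)
        if "prometheus" not in modules.get("enabled_modules", []):
            # Exactly the remedy the mgr suggests in the reply above.
            subprocess.run(["ceph", "mgr", "module", "enable", "prometheus"],
                           check=True)
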
Dec 09 16:44:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 09 16:44:30 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478311096' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 09 16:44:30 compute-0 ceph-mon[75222]: pgmap v1374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:30 compute-0 ceph-mon[75222]: from='client.14678 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:30 compute-0 ceph-mon[75222]: from='client.14682 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:30 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3478311096' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:35.592261+0000)
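
Note the skew in this run of osd.1 (pid 87055) lines: journald stamps them all 16:44:30, but the timestamps embedded in the messages advance one second per tick from ~16:10:35 onward, roughly 33 minutes behind the receipt time. That pattern suggests the unit's output was buffered and flushed in a burst rather than a clock problem, consistent with rsyslogd's "journal files changed, reloading..." a few seconds earlier.
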
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 892928 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:36.592641+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 884736 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:37.592817+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 884736 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:38.592948+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 876544 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:39.593114+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 876544 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:40.593382+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 876544 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:41.593572+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:42.593821+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:43.594022+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 851968 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:44.594237+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 851968 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:45.594400+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:46.594609+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:47.594833+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:48.595023+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:49.595246+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:50.595456+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:51.595666+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:52.595839+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:53.596022+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:54.596157+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:55.596361+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 811008 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:56.596495+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:57.596631+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:58.596793+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:59.596898+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:00.597102+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:01.597324+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:02.597471+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:03.597638+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:04.597802+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:05.597949+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:06.598061+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:07.598194+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:08.598377+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:09.598504+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:10.598673+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:11.598891+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:12.599049+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:13.599214+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:14.599340+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:15.599493+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:16.599652+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:17.599817+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:18.599979+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:19.600135+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 729088 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:20.600310+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:21.600501+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:22.600663+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:23.600782+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:24.600936+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:25.601068+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 704512 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:26.601181+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 704512 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:27.601337+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 696320 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:28.602146+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 696320 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:29.602845+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 688128 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:30.603093+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 688128 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:31.603347+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 688128 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:32.603524+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 679936 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:33.603690+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 679936 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:34.603869+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 671744 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:35.604002+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 671744 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:36.604163+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 671744 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:37.604348+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 663552 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:38.604585+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 663552 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:39.604770+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 663552 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:40.604925+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 655360 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:41.605146+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 655360 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:42.605282+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 647168 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:43.605479+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 638976 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:44.605676+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 630784 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:45.605887+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 630784 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:46.606119+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 622592 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:47.606260+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 622592 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:48.606388+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 622592 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:49.606599+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 614400 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:50.606784+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 614400 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:51.606976+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 606208 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:52.607108+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 606208 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:53.607235+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 606208 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:54.607385+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 598016 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:55.607545+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 598016 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:56.607756+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 581632 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:57.607902+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 581632 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:58.608295+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 581632 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:59.608568+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 573440 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:00.612300+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 573440 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:01.612666+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 565248 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:02.613009+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 565248 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:03.613159+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 548864 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:04.613281+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 548864 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:05.613882+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 548864 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:06.614007+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 540672 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:07.614162+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 540672 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:08.614305+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 516096 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:09.614485+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 516096 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:10.614631+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 516096 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:11.614810+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 507904 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:12.614987+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 507904 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:13.615151+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 499712 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:14.615298+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 499712 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:15.615431+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 491520 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:16.615612+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 491520 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:17.615745+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 491520 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:18.615950+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 483328 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:19.616153+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 483328 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:20.616283+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 483328 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:21.616515+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 475136 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:22.616686+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 466944 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:23.616863+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 466944 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:24.617046+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 466944 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:25.617197+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 458752 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:26.617315+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 458752 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:27.617459+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 450560 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:28.617562+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:29.617808+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 442368 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:30.617932+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 434176 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:31.618122+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 434176 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:32.618288+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 434176 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:33.618433+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 425984 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:34.618564+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 425984 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:35.618797+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 417792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:36.618936+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 417792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:37.619087+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 417792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:38.619224+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 409600 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:39.619410+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 409600 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:40.619807+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 401408 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:41.620454+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 401408 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:42.620604+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 401408 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:43.621489+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 393216 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:44.621819+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 393216 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:45.621963+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 385024 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:46.622297+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 385024 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:47.622570+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 376832 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:48.622842+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 376832 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:49.623090+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 376832 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:50.623321+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 368640 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:51.623516+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 368640 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:52.623819+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 360448 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:53.623969+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 360448 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:54.624228+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 360448 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:55.624364+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 352256 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:56.624495+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 352256 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:57.624625+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 352256 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:58.624867+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 344064 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:59.625008+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 344064 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:00.625154+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 335872 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:01.625304+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 335872 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:02.625453+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 327680 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:03.625611+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 327680 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:04.625760+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 319488 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:05.625911+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 319488 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:06.626036+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 319488 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:07.626194+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 311296 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:08.626385+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 311296 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:09.626510+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 303104 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:10.626624+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 303104 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:11.626797+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 294912 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:12.626940+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 294912 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:13.627080+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 286720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:14.627306+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 286720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:15.627461+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 286720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:16.627542+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 278528 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:17.627677+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 278528 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:18.627921+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 270336 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:19.628050+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 270336 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:20.628202+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 270336 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:21.628420+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 262144 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:22.628609+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 262144 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:23.628800+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 253952 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:24.628980+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 253952 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:25.629127+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 253952 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:26.629326+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 245760 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:27.629494+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 245760 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:28.629691+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 237568 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:29.629862+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 237568 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:30.630063+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 229376 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:31.630289+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 229376 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:32.630415+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 229376 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:33.630521+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 221184 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:34.630644+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 221184 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:35.630777+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 212992 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:36.630914+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 212992 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:37.631061+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 204800 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:38.631228+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 204800 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:39.631416+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 204800 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:40.631666+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 204800 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:41.632168+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 196608 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:42.632574+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 196608 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:43.632820+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 188416 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:44.632968+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 188416 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:45.633157+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 180224 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:46.633456+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 180224 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 6907 writes, 28K keys, 6907 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6907 writes, 1321 syncs, 5.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6907 writes, 28K keys, 6907 commit groups, 1.0 writes per commit group, ingest: 19.91 MB, 0.03 MB/s
                                           Interval WAL: 6907 writes, 1321 syncs, 5.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:47.633704+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 114688 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:48.633941+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 114688 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:49.634113+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 114688 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:50.634426+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 106496 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:51.634669+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 106496 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:52.634819+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 98304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:53.635028+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 98304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:54.635201+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 98304 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:55.635393+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 90112 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:56.635662+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 81920 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:57.635885+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 73728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:58.636124+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 73728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:59.636315+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 73728 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:00.636514+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 65536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:01.636790+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 65536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:02.637032+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 65536 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:03.637331+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 57344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:04.637592+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 57344 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:05.637816+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 49152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:06.638049+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 49152 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:07.638243+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 40960 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:08.638459+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 40960 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:09.638680+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 40960 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:10.638876+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 32768 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:11.639130+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 32768 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:12.639383+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 32768 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:13.639555+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 24576 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:14.639693+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 24576 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:15.639811+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 16384 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:16.639947+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 16384 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:17.640063+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 8192 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:18.640208+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 8192 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:19.640347+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 8192 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:20.640471+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 0 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:21.640626+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 0 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:22.640839+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1040384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:23.640990+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1040384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:24.641127+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 1032192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:25.641282+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 1032192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:26.641434+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 1032192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:27.641714+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 1024000 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:28.641888+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 1024000 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:29.642035+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:30.642251+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:31.642517+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:32.642802+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 1007616 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:33.643067+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 1007616 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:34.643264+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 1007616 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:35.643451+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 999424 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:36.643672+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 999424 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:37.644121+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:38.644295+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:39.644825+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:40.645286+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:41.645778+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:42.645990+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:43.646879+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:44.647278+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:45.647530+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 326.887329102s of 326.891876221s, submitted: 2
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:46.647650+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1163264 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:47.647777+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1155072 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:48.647887+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1155072 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:49.647985+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1155072 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:50.648114+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1155072 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:51.648380+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1155072 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:52.648557+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1155072 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:53.648715+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1155072 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:54.649001+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1155072 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:55.649229+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1155072 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:56.649401+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 1138688 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:57.649552+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 1138688 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:58.649779+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 1138688 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:59.649948+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 1130496 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:00.650141+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 1130496 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:01.650374+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1122304 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:02.650568+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1122304 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:03.650778+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 1114112 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:04.650929+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 1114112 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:05.651074+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 1105920 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:06.651259+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 1105920 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:07.651427+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 1105920 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:08.651555+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 1097728 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:09.651697+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 1097728 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:10.651832+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1089536 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:11.652220+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1089536 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:12.652954+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1089536 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:13.653111+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1081344 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:14.653444+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1081344 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:15.653666+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1073152 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:16.654194+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1073152 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:17.654615+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 1064960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:18.655047+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 1064960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:19.655399+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 1064960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:20.655544+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 1056768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:21.655690+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 1056768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:22.655795+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 1048576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:23.655907+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 1048576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:24.656044+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 1048576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:25.656191+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1040384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:26.656342+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1040384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:27.656505+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 1032192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:28.656685+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 1032192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:29.656882+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 1024000 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:30.657078+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 1024000 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:31.657286+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 1024000 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:32.657440+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:33.657566+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:34.657824+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 1007616 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:35.657992+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 1007616 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:36.658120+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 1007616 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:37.658276+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 999424 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:38.658470+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 999424 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:39.658598+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:40.658761+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:41.658907+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:42.659042+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:43.659180+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:44.659317+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:45.659447+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:46.659915+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:47.660388+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:48.660808+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:49.660954+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:50.661177+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:51.661341+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:52.661655+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:53.661823+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:54.661997+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:55.662163+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:56.662282+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:57.662458+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:58.662609+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:59.662820+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:00.662968+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:01.663150+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:02.663274+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:03.663452+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:04.663579+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:05.663731+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:06.663993+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:07.664249+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:08.664379+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:09.664529+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:10.664682+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:11.664927+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:12.665038+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:13.666636+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:14.666828+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:15.666954+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:16.667137+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:17.667288+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:18.667427+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:19.667575+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:20.667779+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:21.667939+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:22.668064+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:23.668223+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:24.668379+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:25.668530+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:26.668690+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:27.668808+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:28.668966+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:29.669106+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:30.669219+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:31.669370+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:32.669516+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:33.669648+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:34.669775+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:35.669951+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:36.670129+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:37.670268+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:38.670401+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:39.670564+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:40.670704+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:41.670933+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:42.671049+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:43.671193+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:44.671364+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:45.671499+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:46.671632+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:47.671817+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:48.671994+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:49.672166+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:50.672379+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:51.673797+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:52.673929+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:53.674079+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:54.674625+0000)
Dec 09 16:44:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 09 16:44:30 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286483839' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:55.674789+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:56.674967+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:57.675089+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:58.675838+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:59.675982+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:00.676101+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:01.676255+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:02.676480+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:03.676631+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:04.676810+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:05.676979+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:06.677151+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:07.677314+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:08.677438+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:09.677792+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:10.677926+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:11.678168+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:12.678333+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:13.678546+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:14.678742+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:15.678884+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:16.679039+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:17.679213+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:18.679380+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:19.679572+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:20.679958+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:21.680236+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:22.680506+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:23.680685+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:24.680805+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:25.681139+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:26.681361+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:27.681530+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:28.681699+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:29.681863+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:30.681992+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:31.682158+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:32.682309+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:33.682474+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:34.682673+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:35.682884+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:36.683121+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:37.683236+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:38.683413+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:39.683773+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:40.683933+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:41.684097+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:42.684225+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:43.684413+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:44.684570+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:45.684769+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:46.684933+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:47.685060+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:48.685172+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:49.685387+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:50.685630+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:51.685885+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:52.686019+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:53.686175+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:54.686326+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:55.686476+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:56.686670+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:57.686836+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:58.687113+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:59.687396+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:00.687614+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:01.687811+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:02.687959+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:03.688111+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:04.688277+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:05.688464+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:06.688621+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:07.688771+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:08.688977+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:09.689135+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:10.689320+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:11.689501+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:12.689643+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:13.689815+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:14.690005+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:15.690165+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:16.690302+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:17.690428+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:18.690562+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:19.690684+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:20.690862+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:21.691060+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 892928 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:22.691180+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 892928 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:23.691350+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:24.691531+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:25.691652+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:26.691809+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:27.691953+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:28.692087+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:29.692222+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:30.692366+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:31.692505+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:32.692641+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:33.692811+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:34.692927+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:35.693040+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:36.693145+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:37.693256+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:38.693368+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:39.693501+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:40.693647+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:41.693808+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:42.693913+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:43.694037+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:44.694187+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:45.694331+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 876544 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:46.694490+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 876544 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:47.694613+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 876544 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:48.694797+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 876544 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:49.694930+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 876544 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:50.695044+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 876544 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:51.695191+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 876544 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:52.695327+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 876544 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:53.695500+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: mgrc ms_handle_reset ms_handle_reset con 0x564029bc6000
Dec 09 16:44:30 compute-0 ceph-osd[87055]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/740356566
Dec 09 16:44:30 compute-0 ceph-osd[87055]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/740356566,v1:192.168.122.100:6801/740356566]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: get_auth_request con 0x56402be53800 auth_method 0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: mgrc handle_mgr_configure stats_period=5
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 ms_handle_reset con 0x564029bc7400 session 0x5640298e2a80
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402995a000
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 ms_handle_reset con 0x56402995a400 session 0x5640298e2e00
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x564029bc7400
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:54.695629+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:55.695767+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:56.695887+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:57.696115+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:58.696277+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:59.696443+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:00.696587+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:01.696779+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:02.696914+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:03.697091+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:04.697250+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:05.697417+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:06.697562+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:07.697764+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:08.697937+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:09.699527+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:10.699671+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:11.699831+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:12.699963+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:13.700150+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:14.700334+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:15.700454+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:16.700599+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:17.700847+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:18.700995+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:19.701151+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:20.701308+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:21.701460+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:22.701611+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:23.701726+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:24.701919+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:25.702056+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:26.702203+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:27.702357+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:28.702523+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:29.702700+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:30.702897+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:31.703061+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:32.703236+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:33.703405+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:34.703544+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:35.703702+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:36.703826+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:37.703977+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:38.704119+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:39.704279+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:40.704427+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:41.704565+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:42.704666+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:43.704794+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:44.704926+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 598016 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:45.705069+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.990051270s of 300.132232666s, submitted: 90
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:46.705191+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:47.705336+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:48.705489+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:49.705612+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:50.705782+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:51.705933+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:52.706108+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:53.706288+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:54.706457+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:55.706614+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:56.706806+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:57.706939+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:58.707061+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:59.707181+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:00.707356+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:01.707527+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:02.707690+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:03.707764+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:04.707914+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:05.708020+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:06.708141+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:07.708292+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:08.708425+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:09.708565+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:10.708708+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:11.708920+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:12.709074+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:13.709243+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:14.709354+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:15.709470+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:16.709576+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:17.709691+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:18.709768+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:19.709930+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:20.710081+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:21.710238+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:22.710428+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:23.710618+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:24.710790+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:25.710942+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:26.711117+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:27.711274+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:28.711427+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:29.711585+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:30.711788+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:31.711952+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:32.712113+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:33.712253+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:34.712396+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:35.712529+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:36.712648+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:37.712790+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:38.712901+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:39.719586+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:40.719711+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:41.719831+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:42.720000+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:43.720155+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:44.720284+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:45.720415+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:46.720534+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:47.720676+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:48.720819+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:49.720924+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:50.721066+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:51.721264+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:52.721348+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:53.721483+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:54.721593+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:55.721739+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:56.721856+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:57.721969+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:58.722124+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:59.722292+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:00.722421+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:01.722580+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:02.722681+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:03.722798+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:04.723266+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:05.723413+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:06.723536+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:07.723701+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:08.724139+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:09.724312+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:10.724452+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:11.724624+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:12.724796+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:13.724913+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:14.725028+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:15.725159+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:16.725403+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:17.725536+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:18.725673+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:19.725788+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:20.725951+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:21.726200+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:22.726373+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:23.726534+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:24.726663+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:25.726803+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:26.726966+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:27.727204+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:28.727377+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:29.727753+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:30.728069+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:31.728322+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:32.728553+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:33.728809+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:34.729104+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:35.729283+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:36.729473+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:37.729681+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:38.729880+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:39.730025+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:40.730211+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:41.730385+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:42.730550+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:43.730679+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:44.730815+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:45.730975+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:46.731141+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:47.731291+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:48.731425+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:49.731547+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:50.731767+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:51.731896+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:52.732056+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:53.732182+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:54.732319+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:55.732425+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:56.732564+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:57.732716+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:58.732855+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 851968 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:59.733004+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 851968 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:00.733178+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 851968 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:01.733390+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 851968 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:02.733508+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 851968 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:03.733625+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:04.733769+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:05.733907+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:06.734069+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:07.734160+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:08.734304+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:09.738787+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:10.746690+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:11.749529+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:12.753173+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:13.753808+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:14.754057+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:15.755772+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:16.756373+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:17.757890+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:18.758824+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:19.759066+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:20.759662+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:21.759979+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:22.760392+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:23.760572+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:24.760808+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:25.761379+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:26.761772+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:27.761938+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:28.762040+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:29.762287+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:30.762552+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:31.762993+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:32.763281+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:33.763510+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:34.763816+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:35.764083+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
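
[analyst note] Every tune_memory line in this stretch satisfies heap = mapped + unmapped: the tuner tracks how much of the allocator heap is resident versus returned to the OS, against the 4 GiB memory target (the osd_memory_target default). A quick check against the logged numbers:

    # Identity check on the tune_memory line above.
    target, mapped, unmapped, heap = 4294967296, 84082688, 843776, 84926464
    assert mapped + unmapped == heap
    print(f"heap uses {heap / target:.2%} of the {target / 2**30:.0f} GiB target")

"old mem" equals "new mem" (2845415832) on every line here, so the tuner is holding the aggregate cache budget steady while the OSD idles.
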
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:36.764265+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
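
[analyst note] The pair of rocksdb lines above set the block cache's high-priority pool ratio, i.e. the share of cache capacity reserved for high-priority entries. Both printed values round to simple fractions, which suggests they are computed as reserved/capacity and then printed as floats. Verifying the rounding (sketch only):

    # Recover the simple fractions behind the printed ratios.
    from fractions import Fraction
    for r in (0.285714, 0.0555556):
        f = Fraction(r).limit_denominator(100)
        print(f"{r} ~= {f} = {float(f):.6f}")   # 2/7 and 1/18
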
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:37.764574+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:38.764780+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:39.765029+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:40.765172+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:41.765380+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:42.765528+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:43.765795+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:44.766030+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:45.766198+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:46.766262+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:47.766401+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:48.766533+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:49.766705+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:50.766888+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:51.767059+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:52.767197+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:53.767324+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:54.767448+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:55.767648+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:56.767797+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:57.767993+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:58.768151+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:59.768280+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:00.768408+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:01.768582+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:02.768772+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:03.768908+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:04.769025+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:05.769158+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:06.769311+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:07.769532+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:08.769679+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:09.769821+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:10.770015+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:11.770207+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:12.770340+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:13.770581+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:14.770848+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:15.772161+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:16.772600+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:17.773679+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:18.774009+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:19.774445+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:20.774647+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:21.774978+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:22.775184+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:23.775319+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:24.775473+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:25.775893+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:26.776388+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:27.776900+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:28.777442+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:29.777608+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:30.777877+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:31.778052+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:32.778185+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:33.778357+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:34.778484+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:35.778639+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:36.778800+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:37.778937+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:38.779189+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:39.779423+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:40.779662+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:41.779903+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:42.780083+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:43.780271+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:44.780420+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:45.780613+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:46.780789+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 7131 writes, 29K keys, 7131 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7131 writes, 1433 syncs, 4.98 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
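
[analyst note] The DB Stats block above is internally consistent: writes-per-sync and the ingest rate follow directly from the cumulative counters, as the quick check below shows. Further down, the block-cache lines report "occupancy: 18446744073709551615", which is 2^64 - 1; that is almost certainly an unsigned counter wrapping below zero in the stats printer, not a real occupancy.

    # Re-derive the derived figures in the DB Stats block (values copied
    # from the log above).
    uptime_s = 1200.2
    wal_writes, wal_syncs = 7131, 1433
    print(f"cumulative writes per sync: {wal_writes / wal_syncs:.2f}")  # 4.98
    print(f"interval writes per sync:   {224 / 112:.2f}")               # 2.00
    ingest_gb = 0.02
    print(f"ingest rate: {ingest_gb * 1024 / uptime_s:.2f} MB/s")       # 0.02
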
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564027e8d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:47.780961+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:48.781096+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:49.781243+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:50.781406+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:51.781659+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:52.781835+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:53.781944+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:54.782104+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:55.782278+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:56.782411+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:57.782569+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:58.782705+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:59.783311+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:00.783740+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:01.783929+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:02.784062+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:03.784217+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:04.784461+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:05.784640+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:06.784806+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:07.785064+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:08.785223+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:09.785346+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:10.785486+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:11.785688+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:12.785886+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:13.786037+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:14.786206+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:15.786426+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:16.786570+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:17.786685+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:18.786862+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:19.788508+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:20.789861+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:21.790920+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:22.791770+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:23.792484+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:24.793067+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:25.793209+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:26.793435+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:27.793778+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:28.794062+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:29.794366+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:30.794665+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:31.794947+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:32.795146+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:33.795368+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:34.795574+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:35.795772+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:36.795991+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:37.796131+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:38.796283+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:39.796435+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:40.796590+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:41.796753+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:42.796960+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:43.797101+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:44.797259+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:45.797490+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.920989990s of 299.952972412s, submitted: 22
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:46.797680+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:47.797831+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:48.797987+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 1925120 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:49.798185+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:50.798429+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:51.798691+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:52.798927+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:53.799082+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:54.799272+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:55.799469+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:56.799622+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:57.799788+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:58.799953+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:59.800161+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:00.800327+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:01.800467+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:02.800653+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:03.800841+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:04.801049+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:05.801236+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:06.801343+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:07.801473+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:08.801885+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:09.802031+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:10.802188+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:11.802366+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:12.802523+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:13.802678+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:14.802808+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:15.802913+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:16.803087+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:17.803257+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:18.803396+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:19.803524+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:20.803697+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:21.804006+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:22.804185+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:23.804362+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:24.804585+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:25.804779+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:26.804989+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:27.805155+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:28.805364+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:29.805534+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:30.805847+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:31.806243+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:32.806457+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:33.806631+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:34.806791+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:35.806957+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:36.807161+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:37.807399+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:38.807589+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:39.807789+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:40.807953+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:41.808197+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:42.808338+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:43.808472+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:44.808607+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:45.808745+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:46.808886+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:47.809097+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:48.809301+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:49.809469+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:50.809657+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:51.809921+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:52.810089+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:53.810283+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:54.810444+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:55.810637+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1908736 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:56.810797+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:57.810949+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:58.811107+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:59.811279+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:00.811515+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:01.812007+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:02.812188+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:03.812367+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:04.812629+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:05.813142+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:06.813399+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:07.813674+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:08.813935+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:09.814123+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:10.814315+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:11.814647+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:12.814813+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:13.815016+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:14.815208+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:15.815455+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:16.815630+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:17.815795+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:18.815959+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:19.816193+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:20.816392+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:21.816614+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:22.816872+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:23.817023+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:24.817169+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:25.817311+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:26.817521+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:27.817787+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:28.818144+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:29.818494+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:30.818702+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:31.819113+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:32.819366+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:33.819587+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:34.819794+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:35.819953+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:36.820239+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:37.820428+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:38.820625+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:39.820872+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:40.821122+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:41.821397+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:42.821600+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:43.821819+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:44.822137+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:45.822332+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:46.822582+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:47.822805+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:48.823019+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:49.823212+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:50.823363+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:51.823710+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:52.824820+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:53.824941+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:54.825090+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:55.825259+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:56.825442+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:57.825602+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:58.825785+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:59.825924+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:00.826122+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:01.826374+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:02.826571+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:03.826799+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:04.826981+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:05.827217+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:06.827394+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:07.827645+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:08.827830+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:09.827979+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:10.828162+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:11.828349+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:12.828506+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:13.828821+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:14.829024+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:15.829204+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:16.829414+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:17.829599+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:18.829809+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:19.830015+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:20.830181+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:21.830382+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:22.830563+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:23.830683+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:24.830777+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:25.830960+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:26.831204+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:27.831463+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:28.831623+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:29.831841+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:30.832084+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:31.832274+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:32.832430+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:33.832582+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:34.832779+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:35.832986+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:36.833107+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:37.833265+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:38.833419+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:39.833605+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:40.833766+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:41.833911+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:42.834119+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:43.834282+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:44.834434+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:45.834575+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:46.834714+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:47.834906+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:48.835045+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:49.835182+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:50.835361+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:51.835584+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:52.835841+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:53.836173+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:54.836325+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:55.836454+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:56.836596+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:57.836759+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:58.836976+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:59.837101+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:00.837323+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:01.837524+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:02.837678+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:03.837886+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:04.838057+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:05.838235+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:06.838434+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:07.838633+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:08.838877+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:09.839074+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:10.839320+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:11.839583+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:13.215118+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:14.215267+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:15.215436+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:16.215590+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:17.215797+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:18.215984+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:19.216158+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:20.216344+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:21.216496+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:22.216704+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:23.216907+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:24.217106+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:25.217253+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:26.217419+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:27.217568+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:28.217796+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:29.217959+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:30.218140+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:31.218301+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:32.218492+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:33.218630+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:34.218794+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:35.218936+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:36.219072+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 09 16:44:30 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1361884452' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:37.219196+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:38.219358+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:39.219516+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:40.219656+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:41.219838+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:42.219997+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:43.220120+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:44.220273+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:45.220420+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:46.220575+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:47.220763+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:48.220949+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:49.221109+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:50.221301+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 1843200 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:51.221451+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 1843200 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:52.221834+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 1843200 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:53.221987+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 1843200 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:54.222229+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 1843200 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:55.222399+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 1843200 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:56.222563+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 1843200 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:57.222704+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:58.222942+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:59.223144+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:00.223350+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:01.223550+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:02.223795+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:03.223956+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:04.224166+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:05.224337+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:06.224524+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:07.224701+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:08.224899+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:09.225079+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:10.225291+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:11.225478+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:12.225672+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:13.225834+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:14.226044+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:15.226196+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:16.226332+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:17.226652+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:18.226817+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:19.226966+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:20.227140+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:21.227433+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1835008 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:22.227649+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:23.227861+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:24.228108+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:25.228228+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:26.228373+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:27.228576+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:28.228751+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:29.228923+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:30.229054+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:31.229305+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:32.229550+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:33.229752+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004667 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:34.229935+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:35.230082+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x132868/0x1fc000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1826816 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:36.230223+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c046800
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1810432 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 290.358245850s of 290.774627686s, submitted: 90
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:37.230384+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1802240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fce2a000/0x0/0x4ffc00000, data 0x13440c/0x200000, compress 0x0/0x0/0x0, omap 0x14119, meta 0x2bbbee7), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:38.230585+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 18423808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080112 data_alloc: 218103808 data_used: 12288
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 133 ms_handle_reset con 0x56402c046800 session 0x5640299e0540
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:39.230809+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 18595840 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c046c00
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:40.230978+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 18571264 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 134 ms_handle_reset con 0x56402c046c00 session 0x56402c8588c0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:41.231185+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 18571264 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:42.231398+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 18530304 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb9ae000/0x0/0x4ffc00000, data 0x15a9787/0x167a000, compress 0x0/0x0/0x0, omap 0x14f3b, meta 0x2bbb0c5), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:43.231549+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 18530304 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130295 data_alloc: 218103808 data_used: 12307
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:44.231761+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 18530304 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb9ae000/0x0/0x4ffc00000, data 0x15a9787/0x167a000, compress 0x0/0x0/0x0, omap 0x14f3b, meta 0x2bbb0c5), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:45.231917+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 18530304 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:46.232103+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb9ae000/0x0/0x4ffc00000, data 0x15a9787/0x167a000, compress 0x0/0x0/0x0, omap 0x14f3b, meta 0x2bbb0c5), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 18530304 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:47.232296+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 18530304 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:48.232454+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 18530304 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132461 data_alloc: 218103808 data_used: 12307
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c047800
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.080212593s of 12.270214081s, submitted: 62
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:49.232570+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 18382848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 136 ms_handle_reset con 0x56402c047800 session 0x56402c3136c0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:50.232904+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 18382848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:51.233135+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 18382848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc61a000/0x0/0x4ffc00000, data 0x93cdf6/0xa10000, compress 0x0/0x0/0x0, omap 0x15901, meta 0x2bba6ff), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:52.233359+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c047c00
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 18391040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 137 ms_handle_reset con 0x56402c047c00 session 0x564029a60380
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:53.233548+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 18366464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033184 data_alloc: 218103808 data_used: 12307
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:54.233797+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce18000/0x0/0x4ffc00000, data 0x13e9d6/0x212000, compress 0x0/0x0/0x0, omap 0x15efa, meta 0x2bba106), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 18366464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:55.233988+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 18366464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:56.234129+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18399232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:57.234310+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18399232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:58.234475+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18399232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1036086 data_alloc: 218103808 data_used: 12920
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:59.234757+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18399232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce15000/0x0/0x4ffc00000, data 0x140471/0x215000, compress 0x0/0x0/0x0, omap 0x16241, meta 0x2bb9dbf), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:00.234954+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18399232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:01.236875+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18399232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:02.237079+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18399232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce15000/0x0/0x4ffc00000, data 0x140471/0x215000, compress 0x0/0x0/0x0, omap 0x16241, meta 0x2bb9dbf), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:03.237213+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18399232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1036086 data_alloc: 218103808 data_used: 12920
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:04.237373+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18399232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c45f800
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.643180847s of 15.773237228s, submitted: 90
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:05.237527+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 17203200 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 138 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 139 ms_handle_reset con 0x56402c45f800 session 0x5640298e3c00
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:06.237734+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:07.237920+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:08.238151+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fce10000/0x0/0x4ffc00000, data 0x142179/0x21a000, compress 0x0/0x0/0x0, omap 0x16888, meta 0x2bb9778), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043722 data_alloc: 218103808 data_used: 12920
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:09.238281+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:10.238449+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:11.238597+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:12.238833+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:13.239029+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043722 data_alloc: 218103808 data_used: 12920
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fce10000/0x0/0x4ffc00000, data 0x142179/0x21a000, compress 0x0/0x0/0x0, omap 0x16888, meta 0x2bb9778), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:14.239221+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:15.239371+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c046800
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.550939560s of 10.751106262s, submitted: 32
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:16.239545+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:17.239666+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 140 ms_handle_reset con 0x56402c046800 session 0x56402b9d0e00
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 17072128 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:18.239826+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x143c35/0x21b000, compress 0x0/0x0/0x0, omap 0x16eaf, meta 0x2bb9151), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 17072128 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044744 data_alloc: 218103808 data_used: 12920
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:19.240005+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 17072128 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:20.240162+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 17072128 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:21.240305+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x143c35/0x21b000, compress 0x0/0x0/0x0, omap 0x16eaf, meta 0x2bb9151), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 17072128 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x143c35/0x21b000, compress 0x0/0x0/0x0, omap 0x16eaf, meta 0x2bb9151), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:22.240571+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 17072128 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:23.240778+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 17072128 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044744 data_alloc: 218103808 data_used: 12920
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:24.240948+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 17072128 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:25.241190+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 17072128 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.494903564s of 10.046417236s, submitted: 49
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:26.241356+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:27.241547+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:28.241657+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:29.241793+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:30.241958+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:31.242123+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:32.242347+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:33.242516+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:34.242657+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:35.242818+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:36.243014+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:37.243202+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:38.243439+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:39.243607+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:40.243831+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:41.244012+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:42.244185+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:43.244341+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:44.244517+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:45.244654+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:46.244777+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:47.244949+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:48.245199+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:49.245330+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:50.245500+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:51.245702+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:52.246007+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:53.246135+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:54.246303+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:55.246472+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:56.246606+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:57.246823+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:58.246987+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:59.247140+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:00.247306+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:01.247459+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:02.247668+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:03.247830+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:04.248044+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:05.248225+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:06.248387+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:07.248633+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:08.248848+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:09.249047+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:10.249234+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:11.249388+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:12.249592+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:13.249769+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:14.249939+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:15.250121+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:16.250265+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:17.250401+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:18.250533+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:19.250677+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:20.250832+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:21.251011+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:22.251237+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:23.251423+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:24.251594+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:25.251758+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:26.251930+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:27.252109+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:28.252224+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:29.252384+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:30.252588+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:31.252809+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:32.253017+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:33.253206+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:34.253392+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:35.253570+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:36.253765+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:37.253919+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:38.254120+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:39.254278+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:40.254444+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:41.254580+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:42.254814+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:43.254943+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:44.255112+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:45.255256+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:46.255385+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:47.255604+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:48.255797+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:49.256912+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:50.257069+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:51.257406+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:52.257612+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:53.257854+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:54.258097+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:55.258233+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16023552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:56.258366+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:57.258549+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:58.258741+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:59.258910+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:00.259066+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:01.259288+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:02.259623+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:03.259800+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:04.259980+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:05.260128+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:06.260294+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:07.260479+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:08.260662+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c046c00
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16015360 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047438 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:09.260879+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 103.682403564s of 103.690818787s, submitted: 16
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0c000/0x0/0x4ffc00000, data 0x1456b4/0x21e000, compress 0x0/0x0/0x0, omap 0x171f5, meta 0x2bb8e0b), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 142 ms_handle_reset con 0x56402c046c00 session 0x56402c390e00
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86753280 unmapped: 16007168 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c047800
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:10.261004+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 24281088 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:11.261193+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc198000/0x0/0x4ffc00000, data 0xdb7273/0xe92000, compress 0x0/0x0/0x0, omap 0x1741f, meta 0x2bb8be1), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 143 ms_handle_reset con 0x56402c047800 session 0x56402a207500
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 24281088 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:12.261446+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc198000/0x0/0x4ffc00000, data 0xdb7273/0xe92000, compress 0x0/0x0/0x0, omap 0x1741f, meta 0x2bb8be1), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 24281088 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:13.261705+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 24281088 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120577 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:14.261999+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 24281088 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:15.262149+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 24281088 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:16.262317+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 24281088 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:17.262542+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 24281088 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:18.262776+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc193000/0x0/0x4ffc00000, data 0xdb8e0f/0xe95000, compress 0x0/0x0/0x0, omap 0x176e4, meta 0x2bb891c), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 24281088 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:19.262915+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120577 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 24281088 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:20.263048+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c047c00
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.433588028s of 11.506912231s, submitted: 15
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 144 ms_handle_reset con 0x56402c047c00 session 0x5640299e0c40
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 24289280 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:21.263176+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 24289280 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:22.263317+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc602000/0x0/0x4ffc00000, data 0x94a9ff/0xa28000, compress 0x0/0x0/0x0, omap 0x179ab, meta 0x2bb8655), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 24289280 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:23.263432+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402a396800
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 145 ms_handle_reset con 0x56402a396800 session 0x56402bd85880
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:24.263600+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062202 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:25.263808+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:26.263963+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14c5cc/0x22a000, compress 0x0/0x0/0x0, omap 0x17bd7, meta 0x2bb8429), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:27.264076+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:28.264248+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:29.264380+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062202 data_alloc: 218103808 data_used: 16981
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:30.264545+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:31.264677+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:32.264880+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfd000/0x0/0x4ffc00000, data 0x14e067/0x22d000, compress 0x0/0x0/0x0, omap 0x17f25, meta 0x2bb80db), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:33.265059+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:34.265259+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064896 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c046800
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.043392181s of 14.121951103s, submitted: 63
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:35.265413+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfd000/0x0/0x4ffc00000, data 0x14e067/0x22d000, compress 0x0/0x0/0x0, omap 0x17f25, meta 0x2bb80db), peers [0,2] op hist [0,0,0,0,1])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 147 ms_handle_reset con 0x56402c046800 session 0x56402baaf6c0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:36.265554+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fcdf8000/0x0/0x4ffc00000, data 0x14fd5c/0x232000, compress 0x0/0x0/0x0, omap 0x18544, meta 0x2bb7abc), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:37.265813+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:38.265976+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fcdf8000/0x0/0x4ffc00000, data 0x14fd5c/0x232000, compress 0x0/0x0/0x0, omap 0x18544, meta 0x2bb7abc), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:39.266223+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072693 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:40.266412+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fcdf8000/0x0/0x4ffc00000, data 0x14fd5c/0x232000, compress 0x0/0x0/0x0, omap 0x18544, meta 0x2bb7abc), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:41.266608+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:42.266828+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:43.266964+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:44.267110+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072693 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:45.267258+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 24117248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fcdf8000/0x0/0x4ffc00000, data 0x14fd5c/0x232000, compress 0x0/0x0/0x0, omap 0x18544, meta 0x2bb7abc), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c047c00
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.577776909s of 10.795786858s, submitted: 26
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 148 ms_handle_reset con 0x56402c047c00 session 0x56402bd84a80
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:46.267420+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:47.267584+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:48.267741+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:49.267881+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073362 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:50.268095+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:51.268270+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fcdf7000/0x0/0x4ffc00000, data 0x15182b/0x233000, compress 0x0/0x0/0x0, omap 0x18bc6, meta 0x2bb743a), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:52.268452+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fcdf7000/0x0/0x4ffc00000, data 0x15182b/0x233000, compress 0x0/0x0/0x0, omap 0x18bc6, meta 0x2bb743a), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:53.268597+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:54.268802+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073362 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:55.268977+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.949492455s of 10.008740425s, submitted: 39
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf7000/0x0/0x4ffc00000, data 0x15182b/0x233000, compress 0x0/0x0/0x0, omap 0x18bc6, meta 0x2bb743a), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:56.269127+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:57.269262+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:58.269467+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:59.269630+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:00.269852+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:01.270008+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:02.270289+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:03.270551+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:04.270742+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:05.271037+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:06.271209+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:07.271348+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:08.271489+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:09.271618+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:10.271845+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:11.272082+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:12.272345+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:13.272518+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:14.272711+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:15.272924+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:16.273079+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:17.273238+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:18.273439+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:19.273688+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:20.273871+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:21.274028+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:22.274225+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:23.274416+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:24.274574+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:25.274776+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:26.274969+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:27.275097+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:28.275259+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:29.275378+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:30.275546+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:31.275641+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:32.275798+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:33.275921+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:34.276046+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:35.276217+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:36.276353+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:37.276489+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:38.276623+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:39.276774+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:40.276942+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:41.277066+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:42.277245+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:43.277402+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:44.277570+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:45.277753+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:46.277941+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:47.278134+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 8023 writes, 31K keys, 8023 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8023 writes, 1829 syncs, 4.39 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 892 writes, 2343 keys, 892 commit groups, 1.0 writes per commit group, ingest: 1.12 MB, 0.00 MB/s
                                           Interval WAL: 892 writes, 396 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:48.278319+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:49.278467+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:50.278620+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:51.278791+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:52.279014+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 ms_handle_reset con 0x564028f3a800 session 0x564027eb2000
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c047800
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:53.279149+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24125440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: mgrc ms_handle_reset ms_handle_reset con 0x56402be53800
Dec 09 16:44:30 compute-0 ceph-osd[87055]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/740356566
Dec 09 16:44:30 compute-0 ceph-osd[87055]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/740356566,v1:192.168.122.100:6801/740356566]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: get_auth_request con 0x56402c45f800 auth_method 0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: mgrc handle_mgr_configure stats_period=5
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 ms_handle_reset con 0x56402995a000 session 0x56402951a000
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c046400
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 ms_handle_reset con 0x564029bc7400 session 0x56402bc56c40
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402995a000
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:54.279296+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:55.279444+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:56.279831+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:57.279995+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:58.280525+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:59.280895+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:00.281058+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:01.281216+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:02.281391+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:03.281526+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:04.281682+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:05.281882+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:06.282055+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:07.282210+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:08.282342+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:09.282512+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:10.282700+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:11.282889+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:12.283108+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:13.283254+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:14.283386+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:15.283577+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:16.283779+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:17.283966+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:18.284107+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:19.284253+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:20.284428+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:21.284619+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:22.284810+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:23.284982+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:24.285139+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:25.285326+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:26.285510+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:27.285653+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:28.285779+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:29.285965+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:30.286103+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:31.286242+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:32.286375+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:33.286525+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:34.286647+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:35.286830+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:36.286974+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:37.287163+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 23863296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:38.287344+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 ms_handle_reset con 0x564029983c00 session 0x56402a680c40
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x564029942800
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23732224 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:39.287602+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23732224 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:40.287831+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23732224 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:41.288003+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23732224 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:42.288257+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23732224 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:43.288423+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23732224 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:44.288642+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23732224 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076136 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf4000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:45.288847+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23732224 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 110.327018738s of 110.334861755s, submitted: 14
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:46.288997+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 22675456 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:47.289121+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:48.289330+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:49.289530+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:50.289753+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:51.289931+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:52.290152+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:53.290325+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:54.290449+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:55.290579+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:56.290770+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:57.290939+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:58.291332+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:59.291498+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:00.291681+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:01.291812+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:02.292007+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:03.292164+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:04.292336+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:05.292559+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:06.292781+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:07.292969+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:08.293123+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:09.293251+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:10.293383+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:11.293535+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:12.293754+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:13.293945+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:14.294079+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:15.294197+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:16.294362+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:17.294553+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:18.294700+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:19.294845+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:20.308486+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:21.308707+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:22.308962+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:23.309154+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:24.309326+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:25.309479+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:26.309654+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:27.309832+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:28.310022+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:29.310182+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:30.310353+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:31.310507+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:32.310753+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:33.310915+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:34.311045+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:35.311168+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:36.311338+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:37.311468+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:38.311600+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:39.311738+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:40.311870+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:41.312026+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:42.312620+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:43.312776+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:44.312947+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:45.313202+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:46.313407+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:47.313597+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:48.313895+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:49.314149+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:50.314311+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:51.314486+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:52.314790+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:53.315027+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:54.315231+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:55.315412+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:56.315658+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:57.315815+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:58.316036+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:59.316223+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:00.316460+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:01.316642+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:02.316858+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:03.317033+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:04.317255+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:05.317434+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:06.317633+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:07.317831+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:08.317969+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:09.318172+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:10.318306+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:11.318461+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:12.318637+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:13.318815+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:14.319033+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:15.319192+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:16.319337+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:17.319487+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:18.319624+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:19.319800+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:20.319949+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:21.320080+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:22.320342+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:23.320485+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:24.320697+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:25.320905+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:26.321153+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:27.321325+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:28.321571+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:29.321797+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:30.321984+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:31.322215+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:32.322380+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:33.322521+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:34.322712+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:35.322929+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:36.323375+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:37.323511+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:38.323697+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:39.323883+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:40.324026+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:41.324258+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:42.324448+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:43.324602+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:44.324823+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:45.324950+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:46.325130+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:47.325271+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:48.325411+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:49.325550+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:50.325686+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:51.325826+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:52.326086+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:53.326287+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:54.326452+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:55.326586+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:56.326800+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:57.327009+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:58.327166+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:59.327297+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:00.327486+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:01.327612+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:02.327830+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:03.327983+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:04.328144+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:05.328306+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:06.328497+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:07.328676+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:08.328840+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:09.328988+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:10.329120+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:11.329306+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:12.329481+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:13.329652+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:14.329792+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:15.329990+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:16.330187+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:17.330372+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:18.330557+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:19.330675+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:20.330799+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:21.330946+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:22.331176+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:23.331301+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:24.331431+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:25.331609+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:26.331775+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:27.331942+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:28.332091+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:29.332277+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:30.332447+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:31.332626+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:32.332838+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:33.333005+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:34.333137+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:35.333503+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:36.333676+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:37.333854+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:38.333996+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:39.334127+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 22634496 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:40.334376+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
[... the one-second ceph-osd debug cycle above (prioritycache tune_memory / monclient tick / _check_auth_tickets / _check_auth_rotating) repeats with identical values, only the _check_auth_rotating expiry advancing from 2025-12-09T16:37:41 through 16:38:45; the osd.1 heartbeat osd_stat line and the rocksdb commit_cache_size / bluestore.MempoolThread _resize_shards lines recur every few ticks, also with identical values ...]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 22626304 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:46.347048+0000)
[... cycle repeats with mapped: 88530944 / unmapped: 22626304, expiry advancing 2025-12-09T16:38:47 through 16:38:57; heartbeat and rocksdb/bluestore lines unchanged ...]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:58.349534+0000)
[... cycle repeats with mapped: 88547328 / unmapped: 22609920, expiry advancing 2025-12-09T16:38:59 through 16:39:04; heartbeat and rocksdb/bluestore lines unchanged ...]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
[... cycle repeats unchanged, expiry advancing 2025-12-09T16:39:05 through 16:39:20 ...]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:21.354495+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:22.354816+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:23.355025+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:24.355190+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:25.355378+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:26.355532+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:27.355678+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:28.355877+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:29.356087+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:30.356311+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:31.356521+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:32.356814+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:33.357011+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:34.357141+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:35.357326+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:36.357490+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:37.357646+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:38.357868+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:39.358006+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 22609920 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:40.358181+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:41.358349+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:42.358541+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:43.358813+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:44.359005+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:45.359247+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:46.359487+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:47.359680+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:48.359813+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:49.359989+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:50.360144+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:51.360318+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:52.360506+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:53.360688+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:54.360842+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:55.360999+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:56.361179+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:57.361362+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:58.361503+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:59.361691+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:00.361853+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:01.361999+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:02.362142+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:03.362264+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:04.362558+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:05.362713+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:06.362863+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:07.363032+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:08.363211+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:09.363347+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:10.363563+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:11.363775+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:12.363989+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:13.364176+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:14.364373+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:15.364549+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:16.364706+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:17.364909+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:18.365093+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:19.365251+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:20.365462+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:21.365605+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:22.365786+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:23.365964+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:24.366119+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:25.366306+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:26.366465+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:27.366630+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:28.366783+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:29.366939+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:30.367117+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:31.367291+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:32.367496+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:33.367638+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:34.367886+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:35.368070+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:36.368222+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:37.368361+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:38.368499+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:39.368665+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:40.368766+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:41.368882+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:42.369056+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:43.369202+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:44.369457+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:45.369656+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:46.369838+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:47.370046+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:48.370220+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:49.370420+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:50.370593+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:51.370847+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:52.371097+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:53.371274+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:54.371433+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:55.371587+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075416 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:56.371989+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 22601728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:57.372102+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fcdf6000/0x0/0x4ffc00000, data 0x1532aa/0x236000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88563712 unmapped: 22593536 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:58.372226+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88563712 unmapped: 22593536 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:59.372374+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88563712 unmapped: 22593536 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:00.372604+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x564029942400
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 374.217529297s of 374.383392334s, submitted: 112
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160497 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 30892032 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:01.372707+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc5f5000/0x0/0x4ffc00000, data 0x9532cd/0xa37000, compress 0x0/0x0/0x0, omap 0x18ee7, meta 0x2bb7119), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 150 ms_handle_reset con 0x564029942400 session 0x56402a207880
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:02.372851+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:03.373028+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:04.373165+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:05.373310+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:06.373511+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:07.373614+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:08.373770+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:09.373974+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:10.374103+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:11.374247+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:12.374700+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:13.374860+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:14.374975+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:15.375130+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:16.375297+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:17.375438+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:18.375630+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:19.375848+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:20.376042+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:21.376195+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:22.376349+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:23.376441+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:24.376556+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:25.376690+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:30 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:26.376890+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:27.377021+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:28.377199+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:29.377310+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:30 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:30.377439+0000)
Dec 09 16:44:30 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:30 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:31.377577+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:32.377772+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:33.377903+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:34.378059+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:35.378166+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:36.378336+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:37.378505+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:38.378650+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:39.378764+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:40.378975+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:41.379155+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:42.379431+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:43.379589+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:44.379702+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:45.379833+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:46.380055+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:47.380242+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:48.380404+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:49.380591+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:50.380814+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:51.380949+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:52.381081+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:53.381249+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:54.381409+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:55.381527+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:56.381694+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 30875648 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:57.381857+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:58.381977+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:59.382225+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:00.382438+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:01.382575+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:02.382772+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:03.382955+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:04.383157+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:05.383334+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:06.383487+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:07.383669+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:08.383847+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:09.384052+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:10.384231+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:11.384431+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:12.384614+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:13.384796+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:14.384922+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:15.385045+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:16.385192+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163935 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:17.385345+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0x1154e69/0x123a000, compress 0x0/0x0/0x0, omap 0x19186, meta 0x2bb6e7a), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:18.385489+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:19.385627+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:20.385770+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: handle_auth_request added challenge on 0x56402c838000
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:21.385891+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 80.697509766s of 80.781959534s, submitted: 11
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163175 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 30892032 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _renew_subs
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fbded000/0x0/0x4ffc00000, data 0x1156a59/0x123d000, compress 0x0/0x0/0x0, omap 0x19427, meta 0x2bb6bd9), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 151 ms_handle_reset con 0x56402c838000 session 0x56402c86dc00
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:22.386051+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fbded000/0x0/0x4ffc00000, data 0x1156a59/0x123d000, compress 0x0/0x0/0x0, omap 0x19427, meta 0x2bb6bd9), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:23.386221+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 30867456 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcded000/0x0/0x4ffc00000, data 0x156a36/0x23c000, compress 0x0/0x0/0x0, omap 0x19427, meta 0x2bb6bd9), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:24.386395+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 30851072 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:25.386551+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 30851072 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:26.386691+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085513 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 30851072 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:27.386849+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 30851072 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:28.387055+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 30851072 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcded000/0x0/0x4ffc00000, data 0x156a36/0x23c000, compress 0x0/0x0/0x0, omap 0x19427, meta 0x2bb6bd9), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:29.387222+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 30851072 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:30.387369+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 30851072 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:31.387528+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcded000/0x0/0x4ffc00000, data 0x156a36/0x23c000, compress 0x0/0x0/0x0, omap 0x19427, meta 0x2bb6bd9), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:32.387796+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:33.387951+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:34.388117+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:35.388258+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:36.388448+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:37.388566+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:38.388708+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:39.388875+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:40.389016+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:41.389194+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:42.389357+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:43.389504+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:44.389665+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:45.389853+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:46.390026+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:47.390177+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:48.390377+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:49.390521+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:50.390686+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:51.390919+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:52.391073+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:53.391280+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:54.391462+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:55.391598+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:56.391835+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 30842880 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:57.392030+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:58.392187+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:59.392357+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:00.392490+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:01.392639+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:02.392829+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:03.393196+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:04.393546+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:05.393948+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:06.394176+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:07.394714+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:08.395092+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:09.395266+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:10.395433+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:11.395615+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:12.395861+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:13.396011+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:14.396378+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:15.396752+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:16.396941+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:17.397195+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:18.397337+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:19.397481+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:20.397666+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:21.397817+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:22.397980+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:23.398152+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:24.398354+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:25.398586+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:26.398799+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:27.399009+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:28.399294+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:29.399532+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:30.399800+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:31.400047+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:32.400302+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:33.400579+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:34.400788+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:35.401284+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:36.401650+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:37.401915+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:38.402127+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:39.402278+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:40.402442+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:41.402609+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:42.402856+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:43.402972+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:44.403101+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:45.403252+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:46.403398+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:47.403562+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 8336 writes, 32K keys, 8336 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8336 writes, 1976 syncs, 4.22 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 313 writes, 736 keys, 313 commit groups, 1.0 writes per commit group, ingest: 0.31 MB, 0.00 MB/s
                                           Interval WAL: 313 writes, 147 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:48.403704+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:49.403850+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:50.403965+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:51.404068+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:52.404244+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 30834688 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:53.404366+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 30826496 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:54.404468+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 30826496 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:55.404567+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 30826496 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:56.404759+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:31 compute-0 ceph-osd[87055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:31 compute-0 ceph-osd[87055]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087114 data_alloc: 218103808 data_used: 21042
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 30826496 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:57.404892+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fcdeb000/0x0/0x4ffc00000, data 0x1584b5/0x23f000, compress 0x0/0x0/0x0, omap 0x1968b, meta 0x2bb6975), peers [0,2] op hist [])
Dec 09 16:44:31 compute-0 ceph-osd[87055]: do_command 'config diff' '{prefix=config diff}'
Dec 09 16:44:31 compute-0 ceph-osd[87055]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 30687232 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: do_command 'config show' '{prefix=config show}'
Dec 09 16:44:31 compute-0 ceph-osd[87055]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 09 16:44:31 compute-0 ceph-osd[87055]: do_command 'counter dump' '{prefix=counter dump}'
Dec 09 16:44:31 compute-0 ceph-osd[87055]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:58.405077+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: do_command 'counter schema' '{prefix=counter schema}'
Dec 09 16:44:31 compute-0 ceph-osd[87055]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 89161728 unmapped: 30392320 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:59.405236+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: prioritycache tune_memory target: 4294967296 mapped: 89456640 unmapped: 30097408 heap: 119554048 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: tick
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_tickets
Dec 09 16:44:31 compute-0 ceph-osd[87055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:44:00.405350+0000)
Dec 09 16:44:31 compute-0 ceph-osd[87055]: do_command 'log dump' '{prefix=log dump}'
Dec 09 16:44:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:44:31 compute-0 rsyslogd[1004]: imjournal from <np0005552052:ceph-osd>: begin to drop messages due to rate-limiting
Dec 09 16:44:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 09 16:44:31 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1647611119' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 09 16:44:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 09 16:44:31 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2275281231' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 09 16:44:31 compute-0 ceph-mon[75222]: from='client.14685 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:31 compute-0 ceph-mon[75222]: from='client.14688 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:31 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4286483839' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 09 16:44:31 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1361884452' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 09 16:44:31 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1647611119' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 09 16:44:31 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2275281231' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 09 16:44:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 09 16:44:31 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3002810418' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 09 16:44:31 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 09 16:44:31 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329881841' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 09 16:44:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 09 16:44:32 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4026269868' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 09 16:44:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 09 16:44:32 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1955225456' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 09 16:44:32 compute-0 ceph-mon[75222]: pgmap v1375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:32 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3002810418' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 09 16:44:32 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/329881841' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 09 16:44:32 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4026269868' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 09 16:44:32 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1955225456' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 09 16:44:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 09 16:44:32 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286320831' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 09 16:44:32 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:32 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 09 16:44:32 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/59969418' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 09 16:44:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 09 16:44:33 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320242428' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 09 16:44:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 09 16:44:33 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2931300976' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 09 16:44:33 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4286320831' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 09 16:44:33 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/59969418' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 09 16:44:33 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3320242428' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 09 16:44:33 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2931300976' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 09 16:44:33 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 09 16:44:33 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336925393' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 09 16:44:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 09 16:44:34 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/78706060' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 09 16:44:34 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 09 16:44:34 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195317751' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 09 16:44:34 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14722 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:34 compute-0 ceph-mon[75222]: pgmap v1376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:34 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/336925393' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 09 16:44:34 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/78706060' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 09 16:44:34 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/1195317751' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 09 16:44:34 compute-0 ceph-mon[75222]: from='client.14722 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:34 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14724 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:34 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:34 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14726 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:35 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14728 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 122880 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:45.413044+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 122880 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:46.413264+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 114688 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:47.413474+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 114688 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:48.413642+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 114688 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:49.413786+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 106496 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:50.413984+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 106496 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:51.414123+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 98304 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:52.414250+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 98304 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:53.414392+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 90112 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:54.414570+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 90112 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:55.414750+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 90112 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:56.414894+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 81920 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:57.415066+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 81920 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:58.415194+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 73728 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:10:59.415288+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 57344 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:00.415437+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 57344 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:01.415534+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 49152 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:02.415671+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 49152 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:03.422642+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 40960 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:04.422817+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 32768 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:05.422983+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 32768 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:06.423176+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 24576 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:07.423331+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 24576 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:08.423465+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 16384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:09.423589+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 24576 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:10.423772+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 16384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:11.423902+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 16384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:12.424016+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 16384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:13.424154+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 8192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:14.424293+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 8192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:15.424483+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 0 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:16.424633+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 0 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:17.424797+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1040384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:18.424943+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1040384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:19.425110+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:20.425260+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:21.425383+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:22.425537+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:23.425657+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1007616 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:24.425867+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1007616 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:25.426117+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1007616 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:26.427069+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:27.427605+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:28.429215+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:29.429504+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 991232 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:30.429786+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 991232 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:31.430137+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:32.430393+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:33.430663+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 974848 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:34.430929+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 974848 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:35.431195+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 974848 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:36.431330+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 966656 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:37.431548+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 966656 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:38.431802+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 958464 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:39.431955+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 942080 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:40.432155+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 942080 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:41.432355+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:42.432533+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:43.432707+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:44.432879+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:45.433009+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 917504 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:46.433196+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 917504 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:47.433399+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 917504 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:48.433605+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 909312 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:49.433767+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:50.433923+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 909312 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:51.434059+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:52.434257+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:53.434460+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 892928 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:54.434614+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:55.434806+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:56.435002+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:57.435198+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 884736 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:58.435379+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 884736 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:11:59.435530+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 876544 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:00.435998+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 860160 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:01.436454+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 851968 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:02.436796+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 851968 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:03.436965+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 843776 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:04.437290+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 843776 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:05.437638+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 843776 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:06.437910+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 835584 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:07.438148+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 835584 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:08.438629+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 827392 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:09.438824+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 827392 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:10.438980+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 827392 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:11.439144+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 819200 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:12.439283+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 819200 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:13.439529+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 811008 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:14.439680+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 811008 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:15.439789+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 802816 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:16.439932+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 794624 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:17.440236+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 794624 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:18.440368+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 794624 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:19.440491+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 778240 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:20.440606+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 778240 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:21.440770+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:22.440933+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:23.441057+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:24.441201+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:25.441411+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:26.441608+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:27.441799+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:28.441897+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:29.441974+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:30.442134+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:31.442310+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:32.442446+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:33.442597+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:34.442736+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:35.442895+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 729088 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:36.443039+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 729088 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:37.443295+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:38.443496+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:39.443627+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:40.443939+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 712704 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:41.444352+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 712704 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:42.444667+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 712704 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:43.445095+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 704512 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:44.445443+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 704512 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:45.445708+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 696320 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:46.445941+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 696320 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:47.446228+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:48.446500+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:49.446699+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:50.446914+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:51.447069+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:52.447221+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 663552 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:53.447361+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 663552 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:54.447516+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:55.447758+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:56.447943+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:57.448138+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:58.448271+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:12:59.448399+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:00.448566+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:01.448662+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:02.448815+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:03.449003+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:04.449190+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:05.449386+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:06.449663+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:07.449954+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:08.450151+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:09.450323+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:10.450503+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:11.450678+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 581632 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:12.450878+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 581632 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:13.451091+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 581632 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:14.451268+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:15.451455+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:16.451687+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 565248 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:17.452025+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 565248 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:18.452250+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 565248 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:19.452484+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:20.452778+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:21.452952+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 532480 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:22.453156+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 532480 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:23.453426+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 532480 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:24.453642+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:25.453872+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 524288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:26.454163+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 516096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:27.454423+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 516096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:28.454655+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:29.454818+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:30.454995+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:31.455180+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:32.455340+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:33.455496+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:34.455633+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:35.455769+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:36.455955+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:37.456156+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:38.456415+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:39.456576+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:40.456736+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:41.456871+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 442368 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:42.456998+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 442368 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5638 writes, 24K keys, 5638 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5638 writes, 906 syncs, 6.22 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5638 writes, 24K keys, 5638 commit groups, 1.0 writes per commit group, ingest: 18.89 MB, 0.03 MB/s
                                           Interval WAL: 5638 writes, 906 syncs, 6.22 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:43.457135+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:44.457344+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 360448 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:45.457560+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:46.458326+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:47.458582+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:48.458781+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:49.458909+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:50.459048+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:51.459194+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:52.459384+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:53.459542+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 303104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:54.459708+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:55.459842+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:56.460006+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:57.460200+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:58.460384+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:13:59.460550+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 278528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:00.460688+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:01.460778+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 278528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:02.460948+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 278528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:03.461150+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 262144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:04.461308+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 262144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:05.461480+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 262144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:06.461620+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 262144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:07.461845+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 253952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:08.461979+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 253952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:09.462137+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:10.478089+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:11.478222+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 237568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:12.478371+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 237568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:13.478494+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 237568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:14.478623+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 229376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:15.478802+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:16.478950+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:17.479115+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:18.479266+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:19.479531+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 212992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:20.479685+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:21.480101+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 196608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:22.480239+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 196608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:23.480347+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:24.480519+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:25.480668+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:26.480876+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:27.481186+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:28.481343+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 163840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:29.481487+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 163840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:30.481646+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 163840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:31.481823+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:32.481964+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 155648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:33.482165+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 147456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:34.482396+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 147456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:35.482522+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 139264 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:36.482706+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 139264 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:37.482941+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 131072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:38.483142+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 131072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:39.483311+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 131072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:40.483530+0000)
Dec 09 16:44:35 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 122880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:41.483665+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 122880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:42.483801+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 122880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:43.483937+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 106496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:44.484211+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 106496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:45.484406+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 291.901184082s of 292.123352051s, submitted: 7
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:46.484751+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 360448 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:47.484931+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:48.485160+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:49.485453+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:50.485781+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:51.485954+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:52.486181+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:53.486379+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:54.486782+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:55.487081+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:56.487255+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:57.487469+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:58.487694+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1392640 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:14:59.487930+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1392640 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:00.488176+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1384448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:01.488529+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1384448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:02.488849+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1376256 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:03.489034+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1376256 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:04.489191+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1376256 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:05.489421+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1368064 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:06.489615+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1368064 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:07.489791+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1359872 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:08.489961+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1359872 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:09.490137+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1359872 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:10.490324+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1351680 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:11.490490+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1351680 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:12.490712+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1343488 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:13.490991+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1318912 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:14.491237+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1310720 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:15.491455+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1310720 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:16.491666+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1310720 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:17.491874+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1302528 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:18.492039+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1302528 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:19.492245+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1286144 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:20.492387+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1286144 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:21.492522+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1286144 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:22.492670+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 1277952 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:23.492825+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1269760 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:24.493049+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 1261568 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:25.493237+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 1261568 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:26.493409+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 1253376 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:27.493624+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 1253376 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:28.493940+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 1253376 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:29.494132+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1245184 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:30.494475+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1245184 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:31.494757+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1236992 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:32.495000+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1236992 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:33.495196+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1228800 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:34.495397+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1228800 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:35.495600+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1228800 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:36.495803+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1220608 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:37.495976+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1220608 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:38.496155+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1212416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:39.496289+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1212416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:40.496441+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1212416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:41.496586+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1204224 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:42.496736+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1204224 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:43.496875+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1187840 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:44.497251+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1179648 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:45.497523+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1179648 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:46.497886+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1179648 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:47.498085+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1179648 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:48.498246+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1171456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:49.498426+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1171456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:50.498677+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1171456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:51.498950+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1171456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:52.499159+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1171456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:53.499385+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1163264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:54.499680+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1163264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:55.499807+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1163264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:56.499986+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1155072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:57.500183+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1155072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:58.500327+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1155072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:15:59.500527+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1155072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:00.500673+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1155072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:01.500844+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1155072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:02.501011+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1155072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:03.501144+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1146880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:04.501316+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1138688 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:05.501487+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1130496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:06.501654+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1130496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:07.501772+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1130496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:08.501890+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1130496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:09.502055+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1130496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:10.502178+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1130496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:11.502295+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1130496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:12.502933+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1130496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:13.503060+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1114112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:14.503225+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1114112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:15.503365+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1114112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:16.503533+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1114112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:17.503788+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:18.504010+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1114112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:19.504178+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1114112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:20.504338+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1114112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:21.504478+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1114112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:22.504622+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1114112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:23.504773+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1097728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:24.504906+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1097728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:25.505086+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1097728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:26.505205+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1097728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:27.505394+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1097728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:28.505531+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1097728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:29.505704+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 1089536 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:30.505880+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:31.506028+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:32.506215+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:33.506499+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:34.506641+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:35.506774+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:36.506945+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:37.507124+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:38.507361+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:39.507697+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:40.507827+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:41.508025+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:42.508146+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:43.508301+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:44.508487+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1073152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:45.508644+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1073152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:46.508845+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1073152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:47.509064+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1073152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:48.509221+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1073152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:49.509388+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1073152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:50.509561+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1073152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:51.509710+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1073152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:52.509880+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1073152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:53.510033+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1064960 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:54.510177+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1064960 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:55.510317+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1064960 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:56.510483+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1064960 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:57.510685+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1056768 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:58.510785+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1056768 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:16:59.510926+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1056768 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:00.511071+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1056768 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:01.511207+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1056768 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:02.511978+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1056768 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:03.512128+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:04.512313+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:05.512472+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:06.512616+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:07.512946+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:08.513192+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:09.513340+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:10.513525+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:11.513659+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:12.513799+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:13.513925+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:14.514066+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:15.514267+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:16.514392+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:17.514640+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:18.514785+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:19.514940+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:20.515091+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:21.515217+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:22.515340+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:23.515476+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:24.515628+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:25.515867+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:26.515996+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:27.516188+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:28.516693+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:29.516816+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:30.517126+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:31.517518+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:32.517776+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:33.518022+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1032192 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:34.518149+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:35.518302+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:36.518435+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:37.518814+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:38.518976+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:39.519249+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:40.519436+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:41.519832+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:42.519966+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:43.520155+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:44.520360+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:45.520868+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:46.521024+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:47.521215+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:48.521394+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:49.521615+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:50.521790+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:51.521958+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:52.522749+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:53.523223+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:54.523413+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:55.523579+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:56.523846+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:57.524130+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:58.524316+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:17:59.524588+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:00.524762+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:01.524902+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:02.525089+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:03.525226+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:04.525459+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:05.525606+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:06.525824+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:07.526018+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 999424 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:08.526229+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 991232 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:09.526353+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 991232 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:10.526761+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 991232 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:11.526894+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 991232 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:12.527073+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 991232 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:13.527269+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 991232 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:14.527480+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 991232 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:15.527660+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 991232 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:16.527858+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 991232 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:17.528300+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 991232 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:18.528480+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:19.528649+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:20.528782+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:21.529053+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:22.529176+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:23.529308+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:24.529428+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:25.529578+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:26.529842+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:27.530019+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:28.530139+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:29.530275+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:30.530412+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:31.530552+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:32.530672+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:33.530786+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:34.530916+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:35.531034+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:36.531179+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:37.531403+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:38.531576+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 966656 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:39.531775+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 966656 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:40.531969+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 966656 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:41.532158+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 958464 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:42.532320+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 958464 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:43.532486+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 950272 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:44.532696+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 950272 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:45.532881+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 950272 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:46.533034+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 950272 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:47.533179+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 950272 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:48.533311+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 942080 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:49.533488+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: mgrc ms_handle_reset ms_handle_reset con 0x5567e63a2000
Dec 09 16:44:35 compute-0 ceph-osd[86013]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/740356566
Dec 09 16:44:35 compute-0 ceph-osd[86013]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/740356566,v1:192.168.122.100:6801/740356566]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: get_auth_request con 0x5567e6b15800 auth_method 0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: mgrc handle_mgr_configure stats_period=5
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 491520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:50.533690+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 491520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:51.533848+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 491520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:52.534204+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 491520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:53.534384+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 475136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 ms_handle_reset con 0x5567e63a3000 session 0x5567e6ab2e00
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e854a400
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:54.534527+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 475136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:55.534692+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 475136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:56.534794+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 458752 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:57.534990+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 458752 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:58.535130+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 458752 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:18:59.535274+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 458752 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:00.535456+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 458752 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:01.535600+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 458752 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:02.535764+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 458752 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:03.535928+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:04.536068+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:05.536232+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:06.536409+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:07.536595+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:08.536747+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:09.536920+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:10.537089+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:11.537224+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:12.537393+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:13.537523+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:14.537635+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:15.537818+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:16.537953+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:17.538113+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:18.538241+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:19.538409+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:20.538588+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:21.538702+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:22.538898+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:23.539020+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:24.539159+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:25.539264+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:26.539400+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:27.539580+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:28.540003+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:29.540148+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:30.540272+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:31.540407+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:32.540566+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:33.540782+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:34.540973+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:35.541154+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:36.541452+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:37.541623+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:38.541710+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:39.541900+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:40.542064+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:41.542202+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013121 data_alloc: 218103808 data_used: 6232
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 425984 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:42.542329+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 425984 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:43.542442+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 425984 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:44.542578+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 425984 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:45.542701+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.949066162s of 300.105346680s, submitted: 106
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e6f32000
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 188416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:46.542903+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 188416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:47.543068+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 188416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:48.543210+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 188416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:49.543333+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 188416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:50.543460+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 188416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:51.543642+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 188416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:52.543818+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 188416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:53.543957+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 188416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:54.544085+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 172032 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:55.544230+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 172032 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:56.544387+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 163840 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:57.544562+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 163840 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:58.544701+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 163840 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:19:59.545059+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 163840 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:00.545259+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 163840 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:01.545403+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 163840 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:02.545551+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 163840 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:03.545680+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 155648 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:04.545843+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 155648 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:05.545979+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 155648 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:06.546137+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 155648 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:07.546348+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 155648 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:08.546469+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 147456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:09.546640+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 147456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:10.546834+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 147456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:11.546951+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 147456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:12.547099+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 147456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:13.547239+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:14.547357+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:15.547508+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:16.547625+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:17.547810+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:18.547938+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:19.548062+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:20.548193+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:21.548327+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:22.548492+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:23.548665+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:24.548842+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:25.548972+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:26.549125+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:27.549301+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:28.549463+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:29.549630+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:30.549788+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:31.549962+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:32.550085+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:33.550218+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:34.550347+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:35.550495+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:36.550626+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:37.550786+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:38.550925+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:39.551067+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:40.551189+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:41.551330+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:42.551474+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:43.551610+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:44.551816+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:45.551994+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 122880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:46.552170+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 122880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:47.552371+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 122880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:48.552501+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 122880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:49.552654+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 122880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:50.552783+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 122880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:51.552947+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 122880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:52.553088+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 122880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:53.553208+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 122880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:54.553331+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 122880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:55.553451+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 122880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:56.553563+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 122880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:57.553707+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 106496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:58.553914+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 106496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:20:59.554054+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 106496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:00.554182+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 98304 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:01.554290+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 98304 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:02.554398+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 98304 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:03.554512+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 98304 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:04.554693+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 98304 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:05.555492+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 98304 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:06.555656+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 98304 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:07.555935+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 98304 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:08.556132+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 98304 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:09.556260+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 90112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:10.556441+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 90112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:11.556819+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 90112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:12.557424+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 90112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:13.557619+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 90112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:14.557922+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 90112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:15.558079+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 73728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:16.558222+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 73728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:17.558460+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 73728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:18.558588+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 73728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:19.558771+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 73728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:20.558916+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 73728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:21.559067+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 73728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:22.559404+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 73728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:23.559529+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 73728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:24.559708+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 73728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:25.559944+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 65536 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14730 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:26.560172+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 65536 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:27.560427+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 65536 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:28.560573+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 65536 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:29.560827+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 65536 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:30.561107+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 65536 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:31.561259+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 65536 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:32.561485+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 65536 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:33.561650+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 65536 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:34.561964+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 65536 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:35.562124+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:36.562374+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:37.562632+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:38.562771+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:39.562941+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:40.563101+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:41.563200+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} v 0)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:42.563357+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} : dispatch
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:43.563474+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:44.563619+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:45.563802+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:46.563950+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:47.564140+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:48.564281+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 49152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:49.564410+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 40960 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:50.564552+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 40960 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:51.564636+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 40960 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:52.564867+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 40960 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:53.565026+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 40960 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:54.565153+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 32768 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:55.565279+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:56.565437+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:57.565607+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:58.565788+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:21:59.565924+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:00.566066+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:01.566227+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:02.566394+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:03.566543+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:04.566678+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:05.566850+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:06.566995+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:07.567208+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:08.567363+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:09.567533+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:10.569624+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:11.570199+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:12.571386+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:13.571805+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:14.572550+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:15.572960+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:16.573527+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:17.574133+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:18.574390+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:19.574863+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:20.575240+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:21.575538+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:22.575811+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:23.576065+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:24.576318+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:25.576479+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:26.576618+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:27.577160+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:28.577352+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:29.577477+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:30.577650+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:31.577812+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:32.578000+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:33.578214+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:34.578366+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:35.578594+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:36.578871+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:37.579106+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:38.579309+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:39.579538+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:40.579706+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:41.579889+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:42.580092+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:43.580183+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:44.580310+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:45.580497+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:46.580659+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:47.580841+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:48.580980+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:49.581147+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:50.581311+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:51.581449+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:52.581585+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:53.581703+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 16384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:54.581901+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 8192 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:55.582109+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 8192 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:56.582233+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:57.582452+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:58.582612+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:22:59.582763+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:00.582927+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:01.583043+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:02.583224+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:03.583397+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:04.583559+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:05.583754+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:06.583883+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:07.584089+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:08.584238+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:09.584429+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:10.584586+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:11.584801+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:12.584945+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:13.585141+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:14.585310+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:15.585494+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:16.585798+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:17.586079+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:18.586270+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:19.586468+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:20.586660+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:21.586838+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:22.587002+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:23.587272+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:24.587393+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:25.587524+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:26.587705+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:27.588265+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1040384 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:28.588464+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:29.588670+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:30.588795+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:31.588916+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:32.589113+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:33.589238+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:34.589379+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:35.589511+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:36.589801+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:37.589981+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:38.590208+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:39.591094+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:40.591260+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:41.591390+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1032192 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:42.591542+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5886 writes, 24K keys, 5886 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5886 writes, 1030 syncs, 5.71 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5567e481b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:43.591762+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:44.591966+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:45.592125+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:46.592439+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:47.592666+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:48.592787+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:49.592935+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:50.593068+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:51.593606+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:52.593968+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:53.594192+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 999424 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:54.594804+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:55.594949+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 983040 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:56.595072+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:57.595235+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:58.595354+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:23:59.595593+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:00.595831+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:01.595941+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:02.596089+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:03.596279+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:04.596461+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:05.596602+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:06.596784+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:07.596975+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:08.597122+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:09.597253+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:10.597389+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:11.597534+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:12.597695+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:13.597897+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:14.598065+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:15.598221+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:16.598452+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:17.598699+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:18.598943+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:19.599433+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:20.599810+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:21.599957+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:22.600709+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:23.601494+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:24.602299+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:25.602497+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:26.602790+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:27.602967+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:28.603434+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:29.603856+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:30.604077+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:31.604508+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:32.604833+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:33.605073+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:34.605275+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:35.605553+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:36.605767+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:37.606094+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:38.606268+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:39.606399+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:40.606598+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:41.606746+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:42.606905+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:43.607113+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:44.607266+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:45.607432+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 974848 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.954681396s of 299.979553223s, submitted: 18
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:46.607613+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 892928 heap: 76455936 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:47.607822+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:48.607967+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:49.608104+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:50.608347+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:51.608524+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:52.608709+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:53.608898+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:54.609046+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:55.609200+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:56.609347+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:57.609532+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:58.609805+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:24:59.609970+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:00.610160+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:01.610300+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:02.610506+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:03.610676+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:04.610846+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:05.611055+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:06.611247+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:07.611463+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:08.611672+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:09.611840+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:10.611977+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:11.612146+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:12.612311+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:13.612548+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:14.612684+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:15.612874+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:16.613101+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:17.613357+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:18.613529+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:19.613795+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:20.614021+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:21.614182+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:22.614350+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:23.614509+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:24.614774+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:25.614928+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:26.615108+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:27.615553+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:28.615782+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:29.615937+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:30.616134+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:31.616325+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:32.616483+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:33.616712+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:34.616981+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:35.617154+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:36.617320+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:37.617529+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:38.617679+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:39.617833+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:40.618059+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:41.618229+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:42.618378+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:43.618554+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:44.618752+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:45.618925+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:46.619045+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:47.619194+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:48.619364+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:49.619516+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:50.619680+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:51.619834+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:52.619996+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:53.620180+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:54.620356+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:55.620503+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 892928 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:56.620773+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
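From this line on, the only movement in the tune_memory output is an 8 KiB shift from unmapped back into mapped memory; the heap total is unchanged, so the tuner keeps the budget where it was. The arithmetic, for the record:

    # 8 KiB moved from unmapped to mapped between the two readings above;
    # the heap total (mapped + unmapped) is unchanged, so "new mem" stays put.
    assert 76619776 - 76611584 == 8192 == 892928 - 884736
    assert 76619776 + 884736 == 77504512 == 76611584 + 892928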
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:57.621064+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:58.621183+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:25:59.621367+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:00.621585+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:01.621768+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:02.621963+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:03.622150+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:04.622295+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:05.622484+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:06.622608+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:07.622767+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:08.622906+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:09.623092+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:10.623247+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:11.623424+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:12.623566+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:13.623755+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:14.623976+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:15.624190+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:16.624402+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:17.624623+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:18.624799+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:19.625018+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:20.625194+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:21.625370+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:22.625606+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:23.625786+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:24.625967+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:25.626161+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:26.637953+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:27.638210+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:28.638388+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:29.638517+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:30.638877+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:31.639154+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:32.639529+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:33.639798+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:34.639991+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:35.640220+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:36.640354+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:37.640637+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:38.640871+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:39.640979+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:40.641149+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:41.641464+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:42.641696+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:43.641790+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:44.641990+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:45.642119+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:46.642324+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:47.642528+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:48.642696+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:49.642825+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:50.642954+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:51.643156+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:52.643303+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:53.643456+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:54.643613+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:55.643767+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:56.643986+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:57.644280+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:58.644477+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:26:59.644640+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:00.644778+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:01.645008+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:02.645225+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:03.645422+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:04.645589+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:05.645763+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:06.645916+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:07.646162+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:08.646367+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:09.646555+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:10.646927+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:11.647117+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:12.647324+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:13.647487+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:14.647668+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:15.647858+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:16.648045+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:17.648216+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:18.648371+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:19.648562+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:20.648777+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:21.648924+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:22.649115+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:23.649348+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:24.649504+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:25.649616+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:26.649808+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:27.649995+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:28.650114+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:29.650284+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:30.650409+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:31.650533+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:32.650707+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:33.650891+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 868352 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:34.651010+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:35.651144+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:36.651286+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:37.651475+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:38.651598+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:39.651781+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:40.651970+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:41.652078+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:42.652233+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:43.652337+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:44.652479+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:45.652630+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:46.652761+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:47.652933+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:48.653061+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:49.653212+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:50.653372+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:51.653524+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:52.653665+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:53.653856+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:54.654048+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:55.654221+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:56.654340+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:57.654500+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:58.654651+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:27:59.654785+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:00.654997+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:01.655203+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:02.655371+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:03.655512+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:04.655680+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:05.655825+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:06.656004+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:07.656225+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:08.656350+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:09.656872+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:10.657064+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:11.657234+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:12.657396+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:13.657568+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:14.657827+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:15.657993+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:16.658120+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:17.658274+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:18.658442+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:19.658594+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:20.658759+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:21.658917+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:22.659062+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:23.659207+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:24.659477+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:25.659640+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:26.659809+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:27.660101+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:28.660284+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:29.660436+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:30.660681+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:31.660859+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:32.660995+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:33.661125+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:34.661303+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:35.661468+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:36.661669+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:37.661941+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:38.662105+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:39.662273+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:40.662457+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:41.662620+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:42.662817+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:43.662978+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:44.663172+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:45.663312+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:46.663464+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:47.663783+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:48.663949+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:49.664101+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:50.664281+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:51.664551+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:52.664686+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:53.664862+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:54.665010+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:55.665183+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:56.665381+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:57.665620+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:58.665802+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:28:59.666002+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:00.666124+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:01.666258+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:02.666405+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:03.666508+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:04.666674+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:05.666783+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:06.666921+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:07.667118+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:08.667309+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:09.667516+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:10.667783+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:11.667983+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:12.668149+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:13.668306+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 983040 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:14.668494+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:15.668677+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:16.668856+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:17.669058+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:18.669204+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:19.669363+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:20.669483+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:21.669632+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:22.669767+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:23.669996+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:24.670150+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:25.670284+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:26.670467+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:27.670673+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:28.670908+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:29.671082+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:30.671281+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:31.671481+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:32.671632+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:33.671795+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013505 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:34.671959+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 950272 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:35.672111+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e8afac00
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 761856 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:36.672299+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 130 handle_osd_map epochs [130,131], i have 131, src has [1,131]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 290.125671387s of 290.758483887s, submitted: 106
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fce9f000/0x0/0x4ffc00000, data 0xc22e2/0x18d000, compress 0x0/0x0/0x0, omap 0x16f6e, meta 0x2bb9092), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 933888 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:37.672513+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:38.672640+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 133 ms_handle_reset con 0x5567e8afac00 session 0x5567e8bc2000
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce97000/0x0/0x4ffc00000, data 0xc5a6e/0x193000, compress 0x0/0x0/0x0, omap 0x174dc, meta 0x2bb8b24), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 933888 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024751 data_alloc: 218103808 data_used: 6692
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:39.672848+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e842e000
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 10043392 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:40.673015+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 133 handle_osd_map epochs [133,134], i have 134, src has [1,134]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 134 ms_handle_reset con 0x5567e842e000 session 0x5567e61616c0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 10035200 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:41.673216+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1f000/0x0/0x4ffc00000, data 0x539224/0x60b000, compress 0x0/0x0/0x0, omap 0x179a8, meta 0x2bb8658), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 10035200 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:42.673356+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1f000/0x0/0x4ffc00000, data 0x539224/0x60b000, compress 0x0/0x0/0x0, omap 0x179a8, meta 0x2bb8658), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 10035200 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:43.673503+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1f000/0x0/0x4ffc00000, data 0x539224/0x60b000, compress 0x0/0x0/0x0, omap 0x179a8, meta 0x2bb8658), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 10018816 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052437 data_alloc: 218103808 data_used: 7277
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:44.673655+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 10018816 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:45.673812+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 10018816 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:46.673975+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 10018816 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:47.674164+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fca1f000/0x0/0x4ffc00000, data 0x539224/0x60b000, compress 0x0/0x0/0x0, omap 0x179a8, meta 0x2bb8658), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.478294373s of 11.530800819s, submitted: 20
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 134 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 10018816 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:48.674315+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e842e400
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 9871360 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:49.674470+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054453 data_alloc: 218103808 data_used: 7277
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 136 ms_handle_reset con 0x5567e842e400 session 0x5567e8a74fc0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca1d000/0x0/0x4ffc00000, data 0x53ac80/0x60d000, compress 0x0/0x0/0x0, omap 0x17cb9, meta 0x2bb8347), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 9846784 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:50.674599+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 9846784 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:51.674807+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e842e800
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 9715712 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:52.674973+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 137 ms_handle_reset con 0x5567e842e800 session 0x5567e8bce000
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 9830400 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca1b000/0x0/0x4ffc00000, data 0x53c84d/0x60f000, compress 0x0/0x0/0x0, omap 0x180df, meta 0x2bb7f21), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:53.675142+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 9961472 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:54.675316+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038537 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 9961472 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:55.675524+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fce85000/0x0/0x4ffc00000, data 0xcfed8/0x1a5000, compress 0x0/0x0/0x0, omap 0x187d5, meta 0x2bb782b), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:56.675763+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:57.675966+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:58.676200+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fce85000/0x0/0x4ffc00000, data 0xcfed8/0x1a5000, compress 0x0/0x0/0x0, omap 0x187d5, meta 0x2bb782b), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:29:59.676354+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041311 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fce85000/0x0/0x4ffc00000, data 0xcfed8/0x1a5000, compress 0x0/0x0/0x0, omap 0x187d5, meta 0x2bb782b), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:00.676510+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fce85000/0x0/0x4ffc00000, data 0xcfed8/0x1a5000, compress 0x0/0x0/0x0, omap 0x187d5, meta 0x2bb782b), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fce85000/0x0/0x4ffc00000, data 0xcfed8/0x1a5000, compress 0x0/0x0/0x0, omap 0x187d5, meta 0x2bb782b), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:01.676698+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:02.676902+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:03.677049+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:04.677186+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041311 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fce85000/0x0/0x4ffc00000, data 0xcfed8/0x1a5000, compress 0x0/0x0/0x0, omap 0x187d5, meta 0x2bb782b), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:05.677366+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.307020187s of 17.380622864s, submitted: 49
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce85000/0x0/0x4ffc00000, data 0xcfed8/0x1a5000, compress 0x0/0x0/0x0, omap 0x187d5, meta 0x2bb782b), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:06.677487+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:07.677652+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:08.677795+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:09.677952+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044085 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:10.678092+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd1aac/0x1a8000, compress 0x0/0x0/0x0, omap 0x18b04, meta 0x2bb74fc), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:11.678252+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:12.678384+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:13.678593+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd1aac/0x1a8000, compress 0x0/0x0/0x0, omap 0x18b04, meta 0x2bb74fc), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:14.678830+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044085 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:15.678995+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e842e000
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:16.679164+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 10051584 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.018523216s of 11.057118416s, submitted: 9
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 139 handle_osd_map epochs [139,140], i have 140, src has [1,140]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 140 ms_handle_reset con 0x5567e842e000 session 0x5567e8ba0540
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:17.679397+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 10035200 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd369c/0x1ab000, compress 0x0/0x0/0x0, omap 0x18f1c, meta 0x2bb70e4), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:18.679580+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 10035200 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:19.679840+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 10035200 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046859 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:20.679991+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 10035200 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd369c/0x1ab000, compress 0x0/0x0/0x0, omap 0x18f1c, meta 0x2bb70e4), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:21.680145+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 10035200 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd369c/0x1ab000, compress 0x0/0x0/0x0, omap 0x18f1c, meta 0x2bb70e4), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:22.680388+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 10035200 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:23.680584+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 10035200 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:24.680792+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 10035200 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046859 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd369c/0x1ab000, compress 0x0/0x0/0x0, omap 0x18f1c, meta 0x2bb70e4), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:25.680923+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 10035200 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:26.681036+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:27.681209+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:28.681358+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:29.681522+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:30.681789+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:31.681947+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:32.682136+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:33.682286+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:34.682410+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:35.682535+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:36.682682+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:37.682859+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:38.683007+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:39.683197+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:40.683444+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:41.683628+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:42.683784+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:43.683934+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:44.688681+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:45.688882+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:46.689086+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:47.689276+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:48.689438+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:49.689587+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:50.689765+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:51.689980+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:52.690585+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:53.690763+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:54.691171+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:55.691409+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:56.691652+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:57.691886+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:58.692060+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:30:59.692194+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:00.692328+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:01.692604+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:02.692775+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:03.692983+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:04.693119+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:05.693282+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:06.693461+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:07.693627+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:08.693789+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:09.693967+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:10.694125+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:11.694308+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:12.694441+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:13.694575+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:14.694733+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:15.694886+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:16.695007+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:17.695403+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:18.695540+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:19.695713+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:20.696023+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:21.696244+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:22.696400+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:23.696562+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:24.696759+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:25.696927+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:26.697132+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:27.697289+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:28.697464+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:29.697619+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:30.697825+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:31.697977+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:32.698141+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:33.698314+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:34.698438+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:35.698571+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:36.698772+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:37.698980+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 10166272 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:38.699106+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:39.699218+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:40.699383+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:41.699536+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:42.699819+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:43.699989+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:44.700159+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:45.700346+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:46.700479+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:47.700679+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:48.701222+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:49.701746+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:50.702042+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:51.702227+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:52.702400+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:53.702817+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:54.703080+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:55.703329+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 10158080 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:56.703640+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 10149888 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:57.703896+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 10149888 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:58.704425+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 10149888 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:31:59.704664+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 10149888 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:00.704892+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 10149888 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:01.705193+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 10149888 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:02.705504+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 10149888 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:03.705842+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 10149888 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:04.706021+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 10149888 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049633 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:05.706160+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 10149888 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:06.706428+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 10149888 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:07.706711+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 10149888 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd511b/0x1ae000, compress 0x0/0x0/0x0, omap 0x191a2, meta 0x2bb6e5e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:08.706907+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e842f000
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 111.967704773s of 112.060218811s, submitted: 22
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 10117120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:09.707125+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 142 ms_handle_reset con 0x5567e842f000 session 0x5567e6f59880
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 9953280 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1056024 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:10.707366+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fce77000/0x0/0x4ffc00000, data 0xd6cfd/0x1b3000, compress 0x0/0x0/0x0, omap 0x19459, meta 0x2bb6ba7), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 9953280 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:11.707519+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 142 handle_osd_map epochs [142,143], i have 143, src has [1,143]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e842f400
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 143 ms_handle_reset con 0x5567e842f400 session 0x5567e8bcefc0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 9936896 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:12.707763+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 9936896 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:13.707978+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 9936896 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:14.708135+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 9936896 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060090 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:15.708278+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 9936896 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xd88bc/0x1b7000, compress 0x0/0x0/0x0, omap 0x19712, meta 0x2bb68ee), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:16.708448+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 9936896 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xd88bc/0x1b7000, compress 0x0/0x0/0x0, omap 0x19712, meta 0x2bb68ee), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:17.708678+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 9936896 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:18.708844+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 9936896 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xd88bc/0x1b7000, compress 0x0/0x0/0x0, omap 0x19712, meta 0x2bb68ee), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:19.708966+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 9936896 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060090 data_alloc: 218103808 data_used: 7890
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e842f800
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.662577629s of 11.696002960s, submitted: 18
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:20.709218+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xd88bc/0x1b7000, compress 0x0/0x0/0x0, omap 0x19712, meta 0x2bb68ee), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 143 handle_osd_map epochs [143,144], i have 144, src has [1,144]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 144 ms_handle_reset con 0x5567e842f800 session 0x5567e89ee700
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 8667136 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:21.709346+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 8667136 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:22.709462+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 8667136 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xda466/0x1b8000, compress 0x0/0x0/0x0, omap 0x19acc, meta 0x2bb6534), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:23.709584+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e842fc00
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 145 ms_handle_reset con 0x5567e842fc00 session 0x5567e8a0f880
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:24.709711+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063784 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:25.709894+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:26.710028+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdc033/0x1ba000, compress 0x0/0x0/0x0, omap 0x19f00, meta 0x2bb6100), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:27.710268+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdc033/0x1ba000, compress 0x0/0x0/0x0, omap 0x19f00, meta 0x2bb6100), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:28.710375+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:29.710501+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063784 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:30.710653+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.089711189s of 10.157786369s, submitted: 42
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:31.710806+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:32.710940+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6d000/0x0/0x4ffc00000, data 0xddace/0x1bd000, compress 0x0/0x0/0x0, omap 0x1a23c, meta 0x2bb5dc4), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:33.711130+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:34.711296+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066558 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:35.711419+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:36.711651+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:37.711937+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:38.712105+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fce6a000/0x0/0x4ffc00000, data 0xdf6a2/0x1c0000, compress 0x0/0x0/0x0, omap 0x1a572, meta 0x2bb5a8e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:39.712291+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069332 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:40.712511+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:41.712811+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fce6a000/0x0/0x4ffc00000, data 0xdf6a2/0x1c0000, compress 0x0/0x0/0x0, omap 0x1a572, meta 0x2bb5a8e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:42.712983+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fce6a000/0x0/0x4ffc00000, data 0xdf6a2/0x1c0000, compress 0x0/0x0/0x0, omap 0x1a572, meta 0x2bb5a8e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:43.713128+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fce6a000/0x0/0x4ffc00000, data 0xdf6a2/0x1c0000, compress 0x0/0x0/0x0, omap 0x1a572, meta 0x2bb5a8e), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:44.713319+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 7471104 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069332 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:45.713482+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e842fc00
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.128857613s of 15.221799850s, submitted: 21
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 148 ms_handle_reset con 0x5567e842fc00 session 0x5567e8ba08c0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:46.713650+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:47.713878+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:48.714085+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe1292/0x1c3000, compress 0x0/0x0/0x0, omap 0x1a95c, meta 0x2bb56a4), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:49.714237+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072106 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:50.714418+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:51.714592+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:52.714773+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:53.714900+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:54.715038+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xe1292/0x1c3000, compress 0x0/0x0/0x0, omap 0x1a95c, meta 0x2bb56a4), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072106 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:55.715233+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:56.715410+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:57.715581+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:58.715812+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:32:59.715985+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:00.716177+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:01.716342+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:02.716524+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:03.716828+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:04.717013+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:05.717167+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:06.717358+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:07.717567+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:08.717697+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:09.717853+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:10.717990+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:11.718135+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:12.718294+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:13.718489+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:14.718617+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:15.718827+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:16.718996+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:17.719196+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:18.719351+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:19.719528+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:20.719810+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:21.719991+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:22.720163+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:23.720392+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:24.720565+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:25.720758+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:26.720901+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:27.721073+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:28.721231+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:29.721352+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:30.721499+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:31.721632+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:32.721840+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:33.722039+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:34.722155+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:35.722252+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:36.722360+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:37.722541+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:38.722716+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:39.722949+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:40.723085+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:41.723198+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:42.723359+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6453 writes, 26K keys, 6453 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6453 writes, 1287 syncs, 5.01 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 567 writes, 1336 keys, 567 commit groups, 1.0 writes per commit group, ingest: 0.80 MB, 0.00 MB/s
                                           Interval WAL: 567 writes, 257 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:43.723593+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:44.723746+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:45.723886+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:46.724026+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:47.724213+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:48.724348+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 7462912 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:49.724556+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: mgrc ms_handle_reset ms_handle_reset con 0x5567e6b15800
Dec 09 16:44:35 compute-0 ceph-osd[86013]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/740356566
Dec 09 16:44:35 compute-0 ceph-osd[86013]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/740356566,v1:192.168.122.100:6801/740356566]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: get_auth_request con 0x5567e6add400 auth_method 0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: mgrc handle_mgr_configure stats_period=5
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 7266304 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:50.724750+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 7266304 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:51.724931+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 7266304 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:52.725106+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 7266304 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:53.725278+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 ms_handle_reset con 0x5567e854a400 session 0x5567e89ef180
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e842f000
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 7266304 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:54.725450+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 7266304 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:55.725663+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 7266304 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:56.725824+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:57.726030+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:58.726157+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:33:59.726311+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:00.726448+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:01.726637+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:02.726809+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:03.726956+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:04.727097+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:05.727243+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:06.727384+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:07.727588+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:08.727884+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:09.728025+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:10.728150+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:11.728286+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:12.728455+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:13.728620+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:14.728797+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:15.728925+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:16.729066+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:17.729268+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:18.729430+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:19.729560+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:20.729770+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:21.729944+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:22.730088+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:23.730263+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:24.730437+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:25.730628+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:26.730782+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:27.730934+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:28.731040+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:29.731247+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 7258112 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:30.731440+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 7241728 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:31.731607+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 7241728 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:32.731773+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 7241728 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:33.731903+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 7241728 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:34.732149+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 7241728 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:35.732269+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 7241728 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:36.732403+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 7241728 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:37.732571+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 7241728 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:38.732732+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 7241728 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 ms_handle_reset con 0x5567e6adc400 session 0x5567e6eaac40
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e9086000
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:39.732918+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 7233536 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:40.733122+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 7233536 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:41.733290+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 7233536 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:42.733428+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 7233536 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:43.733548+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 7233536 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:44.733677+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 7233536 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce64000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:45.733845+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 7233536 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074880 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 120.187118530s of 120.212257385s, submitted: 23
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:46.734020+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 7200768 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:47.734231+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:48.734455+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:49.738254+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:50.738466+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:51.738601+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:52.738809+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:53.739005+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:54.739167+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:55.739386+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:56.739560+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:57.739818+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:58.739985+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:34:59.740194+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:00.740369+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:01.740581+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:02.740830+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:03.741014+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:04.741186+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:05.741435+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:06.741574+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:07.741826+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:08.742003+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:09.742232+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:10.742369+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:11.742513+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:12.742771+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:13.742964+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:14.743166+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:15.743400+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:16.743634+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:17.743846+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:18.743995+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:19.744259+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:20.744430+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:21.744593+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:22.744822+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:23.744994+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:24.745220+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:25.745437+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:26.745610+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:27.745823+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:28.745988+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:29.746146+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:30.746283+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:31.746427+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:32.746560+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:33.747063+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:34.747330+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:35.747497+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:36.747665+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:37.747867+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:38.748007+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:39.748179+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:40.748378+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:41.748648+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:42.748819+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:43.748975+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:44.749160+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:45.749303+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:46.749491+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:47.749779+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:48.750007+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:49.750183+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:50.750333+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:51.750531+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:52.750765+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:53.751014+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:54.751230+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:55.751480+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:56.751690+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:57.751994+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:58.752227+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:35:59.752466+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:00.752803+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:01.753021+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:02.753278+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:03.753524+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:04.753702+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:05.754106+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:06.754274+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:07.754585+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:08.754781+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:09.754969+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:10.755184+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:11.755384+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:12.755565+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:13.755799+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:14.756014+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:15.756188+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:16.756385+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:17.756626+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:18.756834+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:19.757004+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:20.757180+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:21.757373+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:22.757583+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:23.757846+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:24.757999+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:25.758187+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:26.758351+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:27.758597+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:28.758810+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:29.758979+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:30.759107+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:31.759275+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:32.759448+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:33.759590+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:34.759798+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:35.759930+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:36.760096+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:37.760243+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:38.760459+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:39.760618+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:40.760827+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:41.761003+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:42.761130+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:43.761310+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:44.761601+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:45.761853+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:46.762035+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:47.762226+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:48.762422+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:49.762657+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:50.762949+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:51.763158+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:52.763349+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:53.763572+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:54.763854+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:55.764076+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:56.764293+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:57.764498+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:58.764674+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:36:59.764912+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:00.765139+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:01.765439+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:02.765617+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:03.765839+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:04.766046+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:05.766274+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:06.766483+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:07.766754+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:08.766908+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-mon[75222]: from='client.14724 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:35 compute-0 ceph-mon[75222]: pgmap v1377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:35 compute-0 ceph-mon[75222]: from='client.14726 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:35 compute-0 ceph-mon[75222]: from='client.14728 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:35 compute-0 ceph-mon[75222]: from='client.14730 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:35 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} : dispatch
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:09.767066+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:10.767214+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:11.767338+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:12.767472+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:13.767632+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:14.767780+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:15.767966+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:16.768115+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:17.768332+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:18.768532+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:19.768908+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:20.769036+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:21.769267+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:22.769431+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:23.769677+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:24.769808+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:25.769988+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:26.770147+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:27.770342+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:28.770497+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:29.770706+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:30.770930+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:31.771130+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:32.771332+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:33.771590+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:34.771813+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:35.772121+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:36.772386+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:37.772669+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:38.772877+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:39.773162+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:40.773472+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:41.773762+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:42.774036+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:43.774314+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:44.774578+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:45.774838+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:46.775042+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:47.775349+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:48.775588+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 6021120 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:49.775803+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:50.775971+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:51.776122+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:52.776251+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:53.776397+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:54.776519+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:55.776640+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:56.776817+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:57.776966+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:58.777084+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:37:59.777224+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:00.777386+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:01.777564+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:02.777767+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:03.777944+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:04.778096+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:05.778246+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:06.778391+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:07.778549+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:08.778784+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:09.778973+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:10.779212+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:11.779399+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:12.779607+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:13.779797+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:14.780005+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:15.780303+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:16.780521+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:17.780909+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:18.781147+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:19.781416+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:20.781623+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:21.781911+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:22.782102+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:23.782392+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:24.782614+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:25.782809+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:26.783068+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:27.783378+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:28.783523+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:29.783672+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:30.783915+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:31.784115+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:32.784250+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:33.784404+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:34.784568+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:35.784741+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:36.784941+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:37.785145+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:38.785336+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:39.785556+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:40.785754+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:41.785889+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:42.786093+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:43.786260+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:44.786444+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:45.786656+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:46.786866+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:47.787362+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:48.787597+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:49.787791+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:50.788191+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:51.788396+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:52.788827+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:53.789195+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:54.789526+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:55.789834+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:56.789949+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:57.790103+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:58.790229+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:38:59.790416+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 6004736 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:00.790536+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:01.790781+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:02.791012+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:03.791266+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:04.791408+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:05.791842+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:06.792018+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:07.792236+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:08.792411+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:09.792627+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:10.792769+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:11.792998+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:12.793144+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:13.793435+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:14.793623+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:15.793782+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:16.793936+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:17.794140+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:18.794284+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:19.794472+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:20.794643+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:21.794890+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:22.795050+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:23.795201+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:24.795348+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:25.795566+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:26.795714+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:27.795974+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:28.796118+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:29.796315+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:30.796485+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:31.796698+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:32.796973+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:33.797153+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:34.797340+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:35.797490+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:36.797660+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:37.797825+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:38.798052+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:39.798269+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:40.798409+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:41.798625+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:42.798848+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:43.799049+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:44.799231+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:45.799406+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:46.799568+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:47.799799+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:48.799936+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:49.800125+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:50.800259+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:51.800532+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:52.800679+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:53.800816+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:54.801011+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:55.801159+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:56.801317+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:57.801542+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:58.801770+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:39:59.801948+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:00.802303+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:01.802503+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:02.802633+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:03.802782+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:04.802942+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:05.803089+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:06.803257+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:07.803487+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:08.803763+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:09.803975+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:10.804164+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:11.804397+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:12.804572+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:13.804809+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:14.804984+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:15.805146+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:16.805329+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:17.805545+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:18.805677+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:19.805812+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:20.805971+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:21.806086+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:22.806310+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:23.806593+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:24.806838+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:25.807088+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:26.807356+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:27.807664+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:28.807843+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:29.808099+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:30.808283+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:31.808460+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:32.808587+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:33.808815+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:34.808999+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:35.809160+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:36.809393+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:37.809637+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:38.809862+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:39.810036+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:40.810182+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:41.810354+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:42.810587+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:43.810800+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:44.810920+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:45.811095+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:46.811309+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:47.811563+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:48.811704+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:49.811917+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:50.812114+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:51.812331+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:52.812580+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:53.812803+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:54.812972+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:55.813088+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:56.813255+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074160 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:57.813454+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xe2d11/0x1c6000, compress 0x0/0x0/0x0, omap 0x1ac8a, meta 0x2bb5376), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:58.813609+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:40:59.813712+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 5988352 heap: 86818816 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e9086400
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 374.183349609s of 374.374053955s, submitted: 124
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:00.813874+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 10600448 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:01.814036+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 ms_handle_reset con 0x5567e9086400 session 0x5567e89efc00
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:02.814203+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:03.814397+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:04.814548+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:05.814697+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:06.814828+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:07.815055+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:08.815195+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:09.815346+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:10.815495+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:11.815624+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:12.815800+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:13.816004+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:14.816141+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:15.816275+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:16.816393+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:17.816546+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:18.816700+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:19.816900+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:20.817096+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:21.817224+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:22.817368+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:23.817602+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:24.817815+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:25.818017+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:26.818189+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:27.818376+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:28.818561+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:29.818821+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:30.819025+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:31.819242+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:32.819494+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:33.819643+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:34.819777+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:35.819947+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:36.820098+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:37.820301+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:38.820439+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:39.820654+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:40.820822+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:41.820974+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:42.821168+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:43.821320+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:44.821461+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:45.821666+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:46.821804+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:47.821985+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:48.822144+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:49.822385+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:50.822571+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:51.822759+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:52.822955+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:53.823114+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:54.823266+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:55.823462+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:56.823626+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:57.823857+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:58.823990+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:41:59.824162+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:00.824358+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:01.824548+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:02.824682+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:03.824842+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:04.825040+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:05.825319+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:06.825477+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:07.825655+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:08.825792+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:09.825947+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:10.826108+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:11.826254+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:12.826380+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:13.826497+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:14.826625+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:15.826778+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:16.826910+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102863 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:17.827081+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:18.827177+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:19.827271+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 10575872 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:20.827431+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: handle_auth_request added challenge on 0x5567e9086800
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 80.719802856s of 80.752861023s, submitted: 9
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 10567680 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:21.827659+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 150 handle_osd_map epochs [150,151], i have 151, src has [1,151]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fc9f1000/0x0/0x4ffc00000, data 0x5548ad/0x639000, compress 0x0/0x0/0x0, omap 0x1b0ab, meta 0x2bb4f55), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 151 ms_handle_reset con 0x5567e9086800 session 0x5567e8ba1500
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 10543104 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082557 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:22.827835+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 10543104 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:23.828011+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 10543104 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:24.828179+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 10543104 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:25.828362+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 10543104 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:26.828531+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 10543104 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082557 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:27.828752+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe649d/0x1cc000, compress 0x0/0x0/0x0, omap 0x1b991, meta 0x2bb466f), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5e000/0x0/0x4ffc00000, data 0xe649d/0x1cc000, compress 0x0/0x0/0x0, omap 0x1b991, meta 0x2bb466f), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 10543104 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:28.828915+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 10543104 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:29.829062+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:30.829229+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:31.829448+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:32.829625+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:33.829807+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:34.830052+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:35.830324+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:36.831162+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:37.831475+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:38.831626+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:39.831801+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _renew_subs
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:40.831990+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:41.832160+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:42.832334+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:43.832453+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:44.832634+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:45.832878+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:46.833123+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:47.833321+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:48.833463+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 10518528 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:49.833691+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:50.833948+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:51.834104+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:52.834329+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:53.834521+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:54.834766+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:55.834984+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:56.835201+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:57.836109+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:58.836305+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:42:59.836616+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:00.836805+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:01.836998+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:02.838191+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:03.839261+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:04.840056+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:05.840783+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:06.841287+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:07.841687+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:08.842010+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 10502144 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:09.843062+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:10.843839+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:11.844544+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:12.844771+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:13.845125+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:14.845403+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:15.845703+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:16.846164+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:17.846470+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:18.846647+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:19.846839+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:20.847043+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:21.847169+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:22.847442+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:23.847753+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:24.848029+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:25.848550+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:26.848920+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:27.849395+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:28.849827+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:29.850265+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 10485760 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:30.850601+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:31.850891+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:32.851184+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:33.851440+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:34.851771+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:35.853542+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:36.854274+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:37.855499+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:38.855764+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:39.855984+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:40.856178+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:41.856343+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:42.856641+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 6805 writes, 26K keys, 6805 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6805 writes, 1459 syncs, 4.66 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 352 writes, 673 keys, 352 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
                                           Interval WAL: 352 writes, 172 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:43.856927+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:44.857126+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:45.857339+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:46.857627+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:47.857848+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:48.857989+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:49.858146+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:50.858261+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:51.858396+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:52.858535+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:53.858662+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:54.858914+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:55.859023+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:56.859128+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:57.859288+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:58.859407+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:43:59.859528+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:44:00.859636+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 10469376 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:44:01.859793+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 10346496 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: do_command 'config diff' '{prefix=config diff}'
Dec 09 16:44:35 compute-0 ceph-osd[86013]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 09 16:44:35 compute-0 ceph-osd[86013]: do_command 'config show' '{prefix=config show}'
Dec 09 16:44:35 compute-0 ceph-osd[86013]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:44:02.859928+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: do_command 'counter dump' '{prefix=counter dump}'
Dec 09 16:44:35 compute-0 ceph-osd[86013]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 10149888 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: do_command 'counter schema' '{prefix=counter schema}'
Dec 09 16:44:35 compute-0 ceph-osd[86013]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 09 16:44:35 compute-0 ceph-osd[86013]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 09 16:44:35 compute-0 ceph-osd[86013]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085331 data_alloc: 218103808 data_used: 11951
Dec 09 16:44:35 compute-0 ceph-osd[86013]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe7f1c/0x1cf000, compress 0x0/0x0/0x0, omap 0x1bced, meta 0x2bb4313), peers [1,2] op hist [])
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:44:03.860058+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 9879552 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: tick
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_tickets
Dec 09 16:44:35 compute-0 ceph-osd[86013]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-09T16:44:04.860222+0000)
Dec 09 16:44:35 compute-0 ceph-osd[86013]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 9576448 heap: 91480064 old mem: 2845415832 new mem: 2845415832
Dec 09 16:44:35 compute-0 ceph-osd[86013]: do_command 'log dump' '{prefix=log dump}'
Dec 09 16:44:35 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14734 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:35 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} v 0)
Dec 09 16:44:35 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} : dispatch
Dec 09 16:44:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:44:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 09 16:44:36 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2415834710' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14738 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:36 compute-0 ceph-mon[75222]: from='client.14734 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:36 compute-0 ceph-mon[75222]: from='mgr.14124 192.168.122.100:0/2022010261' entity='mgr.compute-0.ysegzv' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.efuxpz", "name": "rgw_frontends"} : dispatch
Dec 09 16:44:36 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2415834710' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 09 16:44:36 compute-0 ceph-mon[75222]: from='client.14738 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14742 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:36 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Dec 09 16:44:36 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150515734' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 9.146598025505342e-07 of space, bias 1.0, pg target 0.0002743979407651603 quantized to 32 (current 32)
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8061872431253358e-06 of space, bias 4.0, pg target 0.002167424691750403 quantized to 16 (current 16)
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 16:44:36 compute-0 ceph-mgr[75515]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 09 16:44:37 compute-0 nova_compute[243452]: 2025-12-09 16:44:37.308 243461 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 5.41 sec
Dec 09 16:44:37 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14744 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 09 16:44:37 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/751090771' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 09 16:44:37 compute-0 ceph-mon[75222]: from='client.14742 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:37 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/4150515734' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 09 16:44:37 compute-0 ceph-mon[75222]: pgmap v1378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:37 compute-0 ceph-mon[75222]: from='client.14744 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 09 16:44:37 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/751090771' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 09 16:44:37 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 09 16:44:37 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/176107148' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 09 16:44:38 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 09 16:44:38 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 09 16:44:38 compute-0 systemd[1]: Starting Hostname Service...
Dec 09 16:44:38 compute-0 systemd[1]: Started Hostname Service.
Dec 09 16:44:38 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 09 16:44:38 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 09 16:44:38 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/176107148' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 09 16:44:38 compute-0 ceph-mon[75222]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 09 16:44:38 compute-0 ceph-mon[75222]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 09 16:44:38 compute-0 ceph-mon[75222]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 09 16:44:38 compute-0 ceph-mon[75222]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 09 16:44:38 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 09 16:44:38 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634142899' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 09 16:44:38 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:39 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14760 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:39 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/634142899' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 09 16:44:39 compute-0 ceph-mon[75222]: pgmap v1379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:39 compute-0 ceph-mon[75222]: from='client.14760 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:39 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 09 16:44:39 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3246418579' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 09 16:44:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Dec 09 16:44:40 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2309356976' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 09 16:44:40 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3246418579' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 09 16:44:40 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2309356976' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 09 16:44:40 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 09 16:44:40 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2185915436' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 09 16:44:40 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 09 16:44:41 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 09 16:44:41 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/539677666' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 09 16:44:41 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2185915436' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 09 16:44:41 compute-0 ceph-mon[75222]: pgmap v1380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:41 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/539677666' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 09 16:44:41 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14770 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:41 compute-0 podman[266462]: 2025-12-09 16:44:41.76384783 +0000 UTC m=+0.081322047 container health_status 84d676632bb4f080a989766472a2cc2fcf267803021f3bcdf51f0d8bfc9055e2 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 16:44:42 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 09 16:44:42 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313415632' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 09 16:44:42 compute-0 ceph-mon[75222]: from='client.14770 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:42 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/2313415632' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 09 16:44:42 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 09 16:44:42 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477555539' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 09 16:44:42 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:43 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/3477555539' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 09 16:44:43 compute-0 ceph-mon[75222]: pgmap v1381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:43 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14776 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:44 compute-0 ceph-mon[75222]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec 09 16:44:44 compute-0 ceph-mon[75222]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/621956775' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 09 16:44:44 compute-0 ceph-mon[75222]: from='client.14776 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:44 compute-0 ceph-mon[75222]: from='client.? 192.168.122.100:0/621956775' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 09 16:44:44 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14780 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 16:44:44 compute-0 ceph-mgr[75515]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 09 16:44:45 compute-0 ceph-mgr[75515]: log_channel(audit) log [DBG] : from='client.14782 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
